Exploring Software Naturalness through Neural Language Models

06/22/2020
by Luca Buratti, et al.

The Software Naturalness hypothesis argues that programming languages can be understood through the same techniques used in natural language processing. We explore this hypothesis through the use of a pre-trained transformer-based language model to perform code analysis tasks.
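The naturalness hypothesis is usually made concrete by measuring the cross-entropy a language model assigns to unseen code: real code is repetitive and predictable, so it should score much lower than a scrambled sequence of the same tokens. The sketch below illustrates that measurement with a toy add-one-smoothed bigram model over hypothetical token corpora; it is only an illustration of the idea, not the transformer-based model the paper actually evaluates.

```python
import math
from collections import Counter

def bigram_cross_entropy(train_tokens, test_tokens):
    """Cross-entropy (bits/token) of an add-one-smoothed bigram model."""
    vocab = set(train_tokens) | set(test_tokens)
    unigrams = Counter(train_tokens)
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    total = 0.0
    for prev, cur in zip(test_tokens, test_tokens[1:]):
        # Laplace smoothing so unseen bigrams get nonzero probability.
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + len(vocab))
        total -= math.log2(p)
    return total / max(1, len(test_tokens) - 1)

# Hypothetical "training corpus" of repetitive code tokens.
train = ("if x > 0 : return x else : return 0 ; "
         "if y > 0 : return y else : return 0 ;").split()

# A natural-looking held-out snippet vs. the same tokens scrambled.
natural = "if z > 0 : return z else : return 0 ;".split()
scrambled = list(reversed(natural))

h_natural = bigram_cross_entropy(train, natural)
h_scrambled = bigram_cross_entropy(train, scrambled)
```

Under this toy model, `h_natural` comes out lower than `h_scrambled`, mirroring the core naturalness claim that code obeys statistical regularities a language model can exploit.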


