Learning C to x86 Translation: An Experiment in Neural Compilation

08/17/2021
by Jordi Armengol-Estapé, et al.

Deep learning has had a significant impact on many fields. Recently, code-to-code neural models have been applied to code translation, code refinement, and decompilation. However, whether these models can automate compilation has yet to be investigated. In this work, we explore neural compilation, building and evaluating Transformer models that learn to produce x86 assembly from C code. Although the preliminary results are relatively weak, we make our data, models, and code publicly available to encourage further research in this area.
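
To make the task concrete, here is a minimal, hypothetical input-output pair of the kind such a model is trained on: a small C function paired with x86-64 assembly similar to what a compiler like GCC emits at -O1 (AT&T syntax). The exact instructions, registers, and directives vary with the compiler and its flags, so this listing is an illustrative sketch, not a sample from the paper's dataset.

    /* Input side: a minimal C source function */
    int add(int a, int b) {
        return a + b;
    }

    # Target side: x86-64 assembly in the style of gcc -O1 (AT&T syntax).
    # Under the System V ABI the arguments arrive in %edi and %esi;
    # a single lea computes their sum into the return register %eax.
    add:
            leal    (%rdi,%rsi), %eax
            ret

Framing compilation this way, as sequence-to-sequence translation over source and assembly token streams, is what lets standard Transformer machinery be applied to it at all.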
