Natural Language to Code Using Transformers

02/01/2022
by Uday Kusupati, et al.

We tackle the problem of generating code snippets from natural language descriptions using the CoNaLa dataset. We use the self-attention-based transformer architecture and show that it outperforms a recurrent attention-based encoder-decoder. Furthermore, we develop a modified form of back-translation and use cycle-consistent losses to train the model in an end-to-end fashion. We achieve a BLEU score of 16.99, beating the previously reported baseline of the CoNaLa challenge.
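
To make the model family concrete, the sketch below wires up a small transformer encoder-decoder for description-to-snippet generation using PyTorch's nn.Transformer. It is a minimal, hedged illustration assuming a shared subword vocabulary; the class name NL2CodeTransformer, all hyperparameters, and the toy data are illustrative assumptions, not the authors' configuration, and the back-translation / cycle-consistency training described in the abstract is not shown.

```python
# Minimal sketch of a transformer encoder-decoder for natural-language-to-code
# generation. Names, sizes, and the toy data are illustrative assumptions.
import torch
import torch.nn as nn

class NL2CodeTransformer(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=4, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # shared subword embeddings
        self.pos = nn.Embedding(max_len, d_model)        # learned positional embeddings
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)        # per-position token logits

    def _add_pos(self, ids):
        positions = torch.arange(ids.size(1), device=ids.device)
        return self.embed(ids) + self.pos(positions)

    def forward(self, src_ids, tgt_ids):
        # src_ids: (batch, src_len) natural-language tokens
        # tgt_ids: (batch, tgt_len) code tokens fed with teacher forcing
        tgt_len = tgt_ids.size(1)
        # causal mask so each code token only attends to earlier code tokens
        causal_mask = torch.triu(
            torch.full((tgt_len, tgt_len), float("-inf"), device=tgt_ids.device),
            diagonal=1,
        )
        hidden = self.transformer(self._add_pos(src_ids), self._add_pos(tgt_ids),
                                  tgt_mask=causal_mask)
        return self.out(hidden)  # (batch, tgt_len, vocab_size)

# Toy usage: cross-entropy between predicted logits and the shifted gold code tokens.
model = NL2CodeTransformer(vocab_size=8000)
nl = torch.randint(0, 8000, (2, 12))    # two toy natural-language descriptions
code = torch.randint(0, 8000, (2, 20))  # their toy code snippets
logits = model(nl, code[:, :-1])
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 8000), code[:, 1:].reshape(-1))
```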
