A Case Study: Exploiting Neural Machine Translation to Translate CUDA to OpenCL

05/18/2019
by Yonghae Kim, et al.

The sequence-to-sequence (seq2seq) model for neural machine translation has significantly improved the accuracy of natural language translation. More recently, there have been efforts to apply seq2seq models to programming language translation and program comparison. In this work, we present the detailed steps of using a seq2seq model to translate CUDA programs to OpenCL programs, two languages with very similar programming styles. Our work covers (i) a training input set generation method, (ii) pre/post-processing, and (iii) a case study using the Polybench-gpu-1.0, NVIDIA SDK, and Rodinia benchmarks.
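To illustrate the structural similarity the abstract relies on, below is a minimal sketch (a hypothetical vector-add kernel, not an example taken from the paper) of a CUDA kernel alongside its OpenCL counterpart. The near one-to-one mapping between idioms, such as the thread-index computation blockIdx.x * blockDim.x + threadIdx.x in CUDA versus get_global_id(0) in OpenCL, is the kind of token-level correspondence a seq2seq model can plausibly learn from paired training examples.

    // CUDA: element-wise vector addition (hypothetical example)
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            c[i] = a[i] + b[i];
    }

    // OpenCL: the near one-to-one translation of the same kernel
    __kernel void vecAdd(__global const float *a, __global const float *b,
                         __global float *c, int n) {
        int i = get_global_id(0);                       // global work-item index
        if (i < n)
            c[i] = a[i] + b[i];
    }

Note how the kernel body is unchanged and only the qualifiers (__global__ vs. __kernel/__global) and the indexing builtins differ; preprocessing that normalizes such paired kernels into aligned token sequences is the sort of step the abstract's items (i) and (ii) refer to.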
