Self-attention based end-to-end Hindi-English Neural Machine Translation

09/21/2019
by Siddhant Srivastava, et al.

Machine Translation (MT) is an area of study in Natural Language Processing that deals with the automatic translation of human language from one language to another by computer. With a rich research history spanning roughly three decades, machine translation is one of the most sought-after areas of research in the computational linguistics community. As part of this master's thesis, the main focus is on Deep-learning based methods, which have made significant progress in recent years and are becoming the de facto approach in MT. We highlight recent advances in neural translation models, the domains in which NMT has replaced conventional SMT models, and future avenues in the field. Consequently, we propose an end-to-end self-attention transformer network for Neural Machine Translation, trained on a Hindi-English parallel corpus, and compare the model's efficiency with other state-of-the-art models, such as encoder-decoder and attention-based encoder-decoder neural models, on the basis of BLEU score. We conclude this paper with a comparative analysis of the three proposed models.
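For readers unfamiliar with the mechanism the abstract refers to, the sketch below illustrates the scaled dot-product self-attention at the core of the Transformer in plain NumPy. It is a generic single-head illustration with made-up dimensions and random weights, not the authors' implementation.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for a single head.

    X          : (seq_len, d_model) token embeddings
    Wq, Wk, Wv : (d_model, d_k) learned projections (random here)
    returns    : (seq_len, d_k) context vectors
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of every token pair
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # attention-weighted mix of values

# Toy usage: a 5-token sentence embedded in 16 dimensions.
rng = np.random.default_rng(42)
X = rng.normal(size=(5, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 16)) * 0.1 for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (5, 16)

Dividing the scores by sqrt(d_k) keeps the softmax from saturating as the key dimension grows; the full Transformer runs several such heads in parallel inside both the encoder and the decoder.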

Related research

Machine Translation: From Statistical to modern Deep-learning practices (12/11/2018)
Machine translation (MT) is an area of study in Natural Language process...

Neural Machine Translation: A Review (12/04/2019)
The field of machine translation (MT), the automatic translation of writ...

Invariance-based Adversarial Attack on Neural Machine Translation Systems (08/03/2019)
Recently, NLP models have been shown to be susceptible to adversarial at...

Natural Language to Code Using Transformers (02/01/2022)
We tackle the problem of generating code snippets from natural language ...

Hard-Coded Gaussian Attention for Neural Machine Translation (05/02/2020)
Recent work has questioned the importance of the Transformer's multi-hea...

Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned (05/23/2019)
Multi-head self-attention is a key component of the Transformer, a state...

MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention (09/30/2020)
Recent advances in neural forecasting have produced major improvements i...
