Can Neural Networks Learn Symbolic Rewriting?

11/07/2019
by Bartosz Piotrowski, et al.

This work investigates whether current neural architectures are adequate for learning symbolic rewriting. Two kinds of datasets are proposed for this research: one derived from automated proofs and the other a synthetic set of polynomial terms. Experiments using current neural machine translation models are performed and their results are discussed. Ideas for extending this line of research are proposed and their relevance is motivated.
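To make the setup concrete, the synthetic polynomial data can be thought of as pairs (term, rewritten term), serialized into token sequences that a sequence-to-sequence (NMT-style) model can consume. The sketch below is a hypothetical illustration of such data generation, assuming a single distributivity rule and prefix-notation tokenization; it is not the authors' exact dataset construction.

```python
# Hypothetical sketch: generate (source, target) token sequences for a
# symbolic-rewriting task, in the spirit of the paper's synthetic
# polynomial dataset. The rewrite rule and serialization are assumptions
# made for illustration, not the paper's exact setup.
import random

def make_term(rng):
    # Build a random term of the shape a*(b + c) with small integer leaves.
    a, b, c = (rng.randint(1, 9) for _ in range(3))
    return ("mul", a, ("add", b, c))

def rewrite(term):
    # Apply distributivity: a*(b + c) -> a*b + a*c.
    _, a, (_, b, c) = term
    return ("add", ("mul", a, b), ("mul", a, c))

def tokenize(term):
    # Flatten a term into prefix-notation tokens -- the kind of linear
    # sequence an NMT encoder/decoder operates on.
    if isinstance(term, int):
        return [str(term)]
    op, left, right = term
    return [op] + tokenize(left) + tokenize(right)

def make_dataset(n, seed=0):
    # Produce n (source tokens, target tokens) training pairs.
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        src = make_term(rng)
        pairs.append((tokenize(src), tokenize(rewrite(src))))
    return pairs

if __name__ == "__main__":
    for src, tgt in make_dataset(3):
        print(" ".join(src), "->", " ".join(tgt))
```

Framing rewriting this way lets standard translation toolkits be applied unchanged: the "source language" is the original term and the "target language" is its normal form under the rule set.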

