
Can Neural Networks Learn Symbolic Rewriting?

by   Bartosz Piotrowski, et al.

This work investigates whether current neural architectures are adequate for learning symbolic rewriting. Two kinds of data sets are proposed for this research: one derived from automated proofs and the other a synthetic set of polynomial terms. Experiments using current neural machine translation models are performed, and their results are discussed. Ideas for extending this line of research are proposed, and their relevance is motivated.
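Framing symbolic rewriting as a translation task means producing (source, target) sequence pairs, where the target is the source term after a rewrite step. As a minimal sketch of how the synthetic polynomial data set might be generated, the snippet below emits tokenized pairs for a single illustrative rule (distributivity); the specific rule, token format, and generator are assumptions for illustration, not the paper's actual construction.

```python
import random

# Illustrative sketch: generate (source, target) pairs for a symbolic
# rewriting dataset framed as sequence-to-sequence "translation".
# The rewrite rule (distributivity) and tokenization are assumptions,
# not the exact setup from the paper.

VARS = ["x", "y", "z"]

def random_atom():
    """Pick a random variable or single-digit constant token."""
    return random.choice(VARS + [str(random.randint(1, 9))])

def make_pair():
    """Return (lhs, rhs) where rhs is lhs rewritten by distributivity:
    ( a + b ) * c  ->  a * c + b * c
    Terms are space-separated token sequences, ready for an NMT toolkit.
    """
    a, b, c = random_atom(), random_atom(), random_atom()
    lhs = f"( {a} + {b} ) * {c}"
    rhs = f"{a} * {c} + {b} * {c}"
    return lhs, rhs

if __name__ == "__main__":
    random.seed(0)
    for src, tgt in (make_pair() for _ in range(3)):
        print(src, "->", tgt)
```

Each line of output is one training example; a standard NMT model is then trained to map the left-hand sequence to the right-hand one, so "translation quality" measures how reliably the network has internalized the rewrite rule.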



