Stateful Premise Selection by Recurrent Neural Networks

03/11/2020
by Bartosz Piotrowski, et al.

In this work, we develop a new learning-based method for selecting facts (premises) when proving new goals over large formal libraries. Unlike previous methods, which choose sets of facts independently of each other by their rank, the new method uses a notion of state that is updated each time a fact is chosen. Our stateful architecture is based on recurrent neural networks, which have recently been very successful in stateful tasks such as language translation. The new method is combined with data augmentation techniques, evaluated in several ways on a standard large-theory benchmark, and compared to a state-of-the-art premise selection approach based on gradient boosted trees. It is shown to perform significantly better and to solve many new problems.
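The contrast between the two selection regimes can be sketched in a few lines of code. The following is only an illustrative analogue, not the paper's RNN architecture: premises are represented by hypothetical toy vectors, and the "state update" is a crude stand-in for a recurrent network's hidden-state update after each choice. It shows why stateful selection can avoid picking near-duplicate facts, which independent ranking cannot.

```python
# Toy premise embeddings (hypothetical 2-d vectors, for illustration only).
PREMISES = {
    "lemma_add_comm": [1.0, 0.0],
    "lemma_add_assoc": [0.9, 0.1],   # nearly redundant with lemma_add_comm
    "lemma_mul_distrib": [0.0, 1.0],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def select_independent(goal_vec, k):
    """Baseline: rank all premises once, independently of each other."""
    ranked = sorted(PREMISES, key=lambda n: -dot(goal_vec, PREMISES[n]))
    return ranked[:k]

def select_stateful(goal_vec, k):
    """Pick k premises; after each pick, update the state so that
    already-covered directions are down-weighted -- a crude stand-in
    for an RNN updating its hidden state after each choice."""
    state = list(goal_vec)
    pool = dict(PREMISES)
    chosen = []
    for _ in range(k):
        # Score the remaining premises against the current state.
        best = max(pool, key=lambda name: dot(state, pool[name]))
        chosen.append(best)
        vec = pool.pop(best)
        # State update: subtract the component the chosen premise covers.
        state = [s - v for s, v in zip(state, vec)]
    return chosen

goal = [1.0, 0.6]
print(select_independent(goal, 2))  # picks two near-duplicate lemmas
print(select_stateful(goal, 2))     # second pick covers the other direction
```

With the toy goal above, independent ranking returns the two nearly identical addition lemmas, while the stateful selector picks `lemma_add_comm` and then, after updating its state, prefers `lemma_mul_distrib`.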
