DeepMath - Deep Sequence Models for Premise Selection

06/14/2016
by Alex A. Alemi, et al.

We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two-stage approach that achieves strong results on the Mizar corpus while avoiding the hand-engineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving at a large scale.
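The core task can be illustrated with a toy sketch: given a conjecture, rank all available premises by a relevance score and keep only the top candidates for the prover. The paper's models learn this score with neural sequence encoders; the sketch below substitutes a crude character-trigram cosine similarity as a stand-in scorer, and all function names (`char_ngrams`, `score`, `select_premises`) are hypothetical, not from the paper.

```python
from collections import Counter
import math

def char_ngrams(statement, n=3):
    """Character trigrams as a crude stand-in for a learned sequence encoder."""
    s = f"^{statement}$"  # boundary markers so prefixes/suffixes count
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def score(conjecture, premise):
    """Cosine similarity between trigram count vectors: a toy relevance score."""
    a, b = char_ngrams(conjecture), char_ngrams(premise)
    dot = sum(a[g] * b[g] for g in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_premises(conjecture, premises, k=2):
    """Premise selection: rank every premise against the conjecture, keep top k."""
    ranked = sorted(premises, key=lambda p: score(conjecture, p), reverse=True)
    return ranked[:k]
```

For example, a conjecture about natural-number addition should pull the additive-identity lemma ahead of unrelated set-theory or analysis facts, which is exactly the filtering step that shrinks the prover's search space.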



Related research

09/07/2020
Generative Language Modeling for Automated Theorem Proving
We explore the application of transformer-based language models to autom...

11/29/2016
Semantic Parsing of Mathematics by Context-based Learning from Aligned Corpora and Theorem Proving
We study methods for automated parsing of informal mathematical expressi...

03/10/2023
Lemmas: Generation, Selection, Application
Noting that lemmas are a key feature of mathematics, we engage in an inv...

02/18/2022
Selection Strategies for Commonsense Knowledge
Selection strategies are broadly used in first-order logic theorem provi...

05/20/2019
Guiding Theorem Proving by Recurrent Neural Networks
We describe two theorem proving tasks -- premise selection and internal ...

09/28/2017
Premise Selection for Theorem Proving by Deep Graph Embedding
We propose a deep learning-based approach to the problem of premise sele...

11/27/2019
Property Invariant Embedding for Automated Reasoning
Automated reasoning and theorem proving have recently become major chall...
