Learning-assisted Theorem Proving with Millions of Lemmas

02/11/2014
by Cezary Kaliszyk, et al.

Large formal mathematical libraries consist of millions of atomic inference steps that give rise to a corresponding number of proved statements (lemmas). Analogously to informal mathematical practice, only a tiny fraction of these statements is named and re-used in later proofs by formal mathematicians. In this work, we propose and implement criteria that estimate the usefulness of HOL Light lemmas for proving further theorems. We use these criteria to mine the large inference graphs of the HOL Light and Flyspeck libraries, adding up to millions of the best lemmas to the pool of statements that can be re-used in later proofs. We show that, in combination with learning-based relevance filtering, such methods significantly strengthen automated theorem proving of new conjectures over large formal mathematical libraries such as Flyspeck.
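The abstract does not spell out the usefulness criteria, but one natural family of criteria scores lemmas by how much later work depends on them. Below is a minimal, hypothetical sketch of such a criterion: a PageRank-style score computed over a toy inference graph, where an edge from a lemma to its dependencies transfers credit to the statements it re-uses. The graph, lemma names, and scoring details here are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch: rank lemmas in an inference graph by a
# PageRank-style "usefulness" score. Toy data, not the paper's criteria.

def lemma_rank(uses, damping=0.85, iters=50):
    """uses maps each lemma to the list of lemmas its proof depends on.
    Credit flows from a lemma to its dependencies, so statements that
    are re-used by many later proofs accumulate a high score."""
    nodes = set(uses)
    for deps in uses.values():
        nodes.update(deps)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            deps = uses.get(v, [])
            if deps:
                share = damping * rank[v] / len(deps)
                for d in deps:
                    new[d] += share
            else:
                # Dangling lemma (no recorded dependencies):
                # spread its mass uniformly to keep scores normalized.
                for u in nodes:
                    new[u] += damping * rank[v] / n
        rank = new
    return rank

# Toy inference graph with made-up lemma names: "REAL_ADD_SYM" is
# depended on by every other proof, so it should rank highest.
uses = {
    "lemma_a": ["REAL_ADD_SYM", "REAL_MUL_ASSOC"],
    "lemma_b": ["REAL_ADD_SYM"],
    "lemma_c": ["REAL_ADD_SYM", "lemma_a"],
}
scores = lemma_rank(uses)
best = max(scores, key=scores.get)
```

On a real library, such a score would be computed over the full inference graph of millions of steps, and the top-ranked lemmas added to the pool available to the relevance filter.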


