
INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving

by Yuhuai Wu, et al.

In learning-assisted theorem proving, one of the most critical challenges is to generalize to theorems unlike those seen at training time. In this paper, we introduce INT, an INequality Theorem proving benchmark, specifically designed to test agents' generalization ability. INT is based on a procedure for generating theorems and proofs; the knobs of this procedure allow us to measure 6 different types of generalization, each reflecting a distinct challenge characteristic of automated theorem proving. In addition, unlike prior benchmarks for learning-assisted theorem proving, INT provides a lightweight and user-friendly theorem proving environment with fast simulations, conducive to learning-based and search-based research. We introduce learning-based baselines and evaluate them across the 6 dimensions of generalization in the benchmark. We then evaluate the same agents augmented with Monte Carlo Tree Search (MCTS) at test time, and show that MCTS can help to prove new theorems.
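To illustrate the kind of test-time search the abstract refers to, here is a minimal, self-contained MCTS sketch over a toy "proof search" environment. The environment (`ToyProofEnv`) is a hypothetical stand-in, not INT's actual API: a proof succeeds when a counter reaches zero within a step budget, and the agent chooses among three actions at each step. The four MCTS phases (selection via UCT, expansion, random rollout, backpropagation) follow the standard algorithm.

```python
import math
import random


class ToyProofEnv:
    """Hypothetical stand-in for a theorem-proving environment:
    a 'proof' succeeds when the remaining distance hits 0 within max_steps."""

    def __init__(self, distance=4, max_steps=8):
        self.distance, self.max_steps = distance, max_steps

    def initial_state(self):
        return (self.distance, 0)  # (remaining distance, steps taken)

    def actions(self, state):
        return [] if self.is_terminal(state) else [-1, 0, 1]

    def step(self, state, action):
        d, t = state
        return (max(d + action, 0), t + 1)

    def is_terminal(self, state):
        d, t = state
        return d == 0 or t >= self.max_steps

    def reward(self, state):
        return 1.0 if state[0] == 0 else 0.0


class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}  # action -> Node
        self.visits, self.value = 0, 0.0


def uct_select(node, c=1.4):
    """Pick the child maximizing the UCT score (exploitation + exploration)."""
    def score(child):
        return (child.value / (child.visits + 1e-9)
                + c * math.sqrt(math.log(node.visits + 1) / (child.visits + 1e-9)))
    return max(node.children.values(), key=score)


def mcts(env, n_simulations=200, seed=0):
    rng = random.Random(seed)
    root = Node(env.initial_state())
    for _ in range(n_simulations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(env.actions(node.state)):
            node = uct_select(node)
        # 2. Expansion: add one untried child, if any remain.
        untried = [a for a in env.actions(node.state) if a not in node.children]
        if untried:
            a = rng.choice(untried)
            node.children[a] = Node(env.step(node.state, a), parent=node)
            node = node.children[a]
        # 3. Rollout: play random actions until a terminal state.
        state = node.state
        while not env.is_terminal(state):
            state = env.step(state, rng.choice(env.actions(state)))
        reward = env.reward(state)
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited action at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

In this toy setting the only action that shortens the "proof" is `-1`, so with enough simulations the search concentrates its visits on that branch, mirroring how MCTS focuses exploration on promising proof steps at test time.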

