
Generating Mutually Inductive Theorems from Concise Descriptions

by Sol Swords, et al.

We describe defret-mutual-generate, a utility for proving ACL2 theorems about large mutually recursive cliques of functions. It builds on previous tools such as defret-mutual and make-flag, which automate parts of the process but still require a theorem body to be written out for each function in the clique. For large cliques, this tends to mean that certain common hypotheses and conclusions are repeated many times, making proofs difficult to read, write, and maintain. This utility automates several of the most common patterns that occur in these forms, such as including hypotheses based on the names or types of formals. Its input language is rich enough to support forms that have some parts common to all functions and some parts unique to each. One application of defret-mutual-generate has been to support proofs about the FGL rewriter, which consists of a mutually recursive clique of 49 functions. The use of this utility reduced the size of the forms that express theorems about this clique by an order of magnitude. It has also greatly reduced the need to edit theorem forms when changing definitions in the clique, even when adding or removing functions.
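To illustrate the pattern the abstract describes (adding hypotheses keyed on formal names, and conclusions keyed on return-value names), here is a hypothetical sketch of an invocation. The rule keywords (:formal-hyps, :return-concls), the predicate interp-st-ok, and the clique name fgl-rewrite are illustrative assumptions, not taken from the utility's actual documentation:

```lisp
;; Hypothetical sketch, not the utility's verified interface.
(defret-mutual-generate return-invariants-of-fgl-rewrite
  ;; Add this hypothesis to the theorem for every function in the
  ;; clique that takes a formal named interp-st:
  :formal-hyps ((interp-st (interp-st-ok interp-st)))
  ;; Add this conclusion to the theorem for every function that
  ;; returns a value named new-interp-st:
  :return-concls ((new-interp-st (interp-st-ok new-interp-st)))
  :mutual-recursion fgl-rewrite)
```

Per the abstract, a single form of this shape can replace 49 hand-written theorem bodies, and it continues to apply unchanged when functions are added to or removed from the clique.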



