
Robust Computer Algebra, Theorem Proving, and Oracle AI

by Gopal P. Sarma, et al.
Emory University

In the context of superintelligent AI systems, the term "oracle" has two meanings. One refers to modular systems queried for domain-specific tasks. The other, referring to a class of systems which may be useful for addressing the value alignment and AI control problems, denotes a superintelligent AI system that only answers questions. The aim of this manuscript is to survey contemporary research problems related to oracles which align with long-term research goals of AI safety. We examine existing question answering systems and argue that their high degree of architectural heterogeneity makes them poor candidates for rigorous analysis as oracles. On the other hand, we identify computer algebra systems (CASs) as primitive examples of domain-specific oracles for mathematics and argue that efforts to integrate computer algebra systems with theorem provers, systems which have largely been developed independently of one another, provide a concrete set of problems related to the notion of provable safety that has emerged in the AI safety community. We review approaches to interfacing CASs with theorem provers, describe well-defined architectural deficiencies that have been identified with CASs, and suggest possible lines of research and practical software projects for scientists interested in AI safety.


