Robust Computer Algebra, Theorem Proving, and Oracle AI

08/08/2017
by Gopal P. Sarma, et al.
In the context of superintelligent AI systems, the term "oracle" has two meanings. One refers to modular systems queried for domain-specific tasks. The other, referring to a class of systems that may be useful for addressing the value alignment and AI control problems, is a superintelligent AI system that only answers questions. The aim of this manuscript is to survey contemporary research problems related to oracles that align with long-term research goals of AI safety. We examine existing question-answering systems and argue that their high degree of architectural heterogeneity makes them poor candidates for rigorous analysis as oracles. On the other hand, we identify computer algebra systems (CASs) as primitive examples of domain-specific oracles for mathematics and argue that efforts to integrate computer algebra systems with theorem provers, two classes of systems that have largely been developed independently of one another, provide a concrete set of problems related to the notion of provable safety that has emerged in the AI safety community. We review approaches to interfacing CASs with theorem provers, describe well-defined architectural deficiencies that have been identified in CASs, and suggest possible lines of research and practical software projects for scientists interested in AI safety.
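The kind of architectural deficiency at issue can be illustrated with a classic CAS soundness gap: rewriting e/e to 1 without recording the side condition e ≠ 0. The toy sketch below (all names are illustrative, not from the paper) contrasts an unchecked rewrite with a "checked" variant that returns the proof obligations a theorem prover would be asked to discharge — the basic shape of a CAS/prover interface.

```python
# Toy illustration of a CAS soundness gap and a prover-friendly fix.
# Terms are nested tuples, e.g. ("div", "x", "x") for x/x.
# All function and constructor names here are illustrative assumptions.

def simplify_unsafe(term):
    """Naive CAS-style rewriting: applies e/e -> 1 unconditionally,
    silently assuming the denominator is nonzero."""
    if isinstance(term, tuple) and term[0] == "div" and term[1] == term[2]:
        return 1
    return term

def simplify_checked(term):
    """Safe variant: returns the simplified term together with the
    side conditions a theorem prover must discharge for the rewrite
    to be valid."""
    if isinstance(term, tuple) and term[0] == "div" and term[1] == term[2]:
        return 1, [("nonzero", term[1])]  # obligation: denominator != 0
    return term, []

expr = ("div", "x", "x")
print(simplify_unsafe(expr))   # 1 — unsound if x may be 0
print(simplify_checked(expr))  # (1, [("nonzero", "x")])
```

In the checked design, the CAS remains an untrusted oracle: its output is only accepted once the accompanying obligations are verified, which is the division of labor most CAS/prover bridges aim for.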


