Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience – an initial exploration

12/16/2020
by Dan Elton, et al.

Artificial intelligence has made great strides since the deep learning revolution, but AI systems still struggle to extrapolate outside of their training data and to adapt to new situations. For inspiration we look to the domain of science, where scientists have developed theories with a remarkable ability to extrapolate, sometimes predicting the existence of phenomena that have never been observed before. According to David Deutsch, this type of extrapolation, which he calls "reach", arises because scientific theories are hard to vary. In this work we investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning such as the bias-variance trade-off and Occam's razor. We distinguish internal variability, how much a model/theory can be varied internally while still yielding the same predictions, from external variability, how much a model must be varied to accurately predict new, out-of-distribution data. We discuss how internal variability can be measured using the size of the Rashomon set and how external variability can be measured using Kolmogorov complexity. We explore the role hard-to-vary explanations play in intelligence by looking at the human brain, where we distinguish two learning systems. The first operates similarly to deep learning and likely underlies most of perception and motor control, while the second is a more creative system capable of generating hard-to-vary explanations of the world. We argue that figuring out how to replicate this second system is a key challenge that must be solved in order to realize artificial general intelligence. Finally, we make contact with the framework of Popperian epistemology, which rejects induction and asserts that knowledge generation is an evolutionary process proceeding through conjecture and refutation.
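To make the two notions concrete, here is a minimal, self-contained sketch (our illustration, not code from the paper): it approximates the size of the Rashomon set for a toy linear regression by Monte Carlo sampling, and uses compressed description length as the standard computable stand-in for Kolmogorov complexity. The tolerance epsilon, the sampling radius, and the synthetic data are all arbitrary illustrative choices.

    import numpy as np
    import zlib

    rng = np.random.default_rng(0)

    # Synthetic regression problem: y = X @ w_true + noise.
    n, d = 200, 3
    X = rng.normal(size=(n, d))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=n)

    def mse(w):
        """Mean squared error of a linear model with weights w."""
        return np.mean((X @ w - y) ** 2)

    # Reference model: the least-squares optimum.
    w_best, *_ = np.linalg.lstsq(X, y, rcond=None)
    best_loss = mse(w_best)

    # Internal variability: sample weights from a box around the optimum
    # and measure the fraction that stay within epsilon of the best loss.
    # This fraction is a crude Monte Carlo proxy for the Rashomon set's
    # volume -- the bigger it is, the easier the model is to vary.
    epsilon = 0.05      # illustrative loss tolerance defining the Rashomon set
    box_radius = 0.5    # illustrative sampling radius around the optimum
    samples = w_best + rng.uniform(-box_radius, box_radius, size=(10_000, d))
    inside = np.fromiter((mse(w) <= best_loss + epsilon for w in samples), bool)
    print(f"estimated Rashomon volume fraction: {inside.mean():.3f}")

    # External variability: the compressed length of a model's description
    # is a standard computable upper bound on its Kolmogorov complexity.
    description = w_best.astype(np.float32).tobytes()
    print(f"compressed model description: {len(zlib.compress(description))} bytes")

A model class with a large Rashomon volume fraction is easy to vary internally, while a short compressed description suggests low complexity; the paper's argument concerns how these two quantities trade off.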

Related research

Projection: A Mechanism for Human-like Reasoning in Artificial Intelligence (03/24/2021)
Artificial Intelligence systems cannot yet match human abilities to appl...

Learning explanations that are hard to vary (09/01/2020)
In this paper, we investigate the principle that `good explanations are ...

To Root Artificial Intelligence Deeply in Basic Science for a New Generation of AI (09/11/2020)
One of the ambitions of artificial intelligence is to root artificial in...

Deceptive AI Explanations: Creation and Detection (01/21/2020)
Artificial intelligence comes with great opportunities but also grea...

Artificial intelligence moral agent as Adam Smith's impartial spectator (05/19/2023)
Adam Smith developed a version of moral philosophy where better decision...

Stress and Adaptation: Applying Anna Karenina Principle in Deep Learning for Image Classification (02/22/2023)
Image classification with deep neural networks has reached state-of-art ...

Introducing RISK (07/17/2022)
This extended abstract introduces the initial steps taken to develop a s...
