Neural Network Quine

03/15/2018
by Oscar Chang, et al.

Self-replication is a key aspect of biological life that has been largely overlooked in Artificial Intelligence systems. Here we describe how to build and train self-replicating neural networks. The network replicates itself by learning to output its own weights. The network is designed using a loss function that can be optimized with either gradient-based or non-gradient-based methods. We also describe a method we call regeneration to train the network without explicit optimization, by injecting the network with predictions of its own parameters. The best solution for a self-replicating network was found by alternating between regeneration and optimization steps. Finally, we describe a design for a self-replicating neural network that can solve an auxiliary task such as MNIST image classification. We observe that there is a trade-off between the network's ability to classify images and its ability to replicate, but training is biased towards increasing its specialization at image classification at the expense of replication. This is analogous to the trade-off between reproduction and other tasks observed in nature. We suggest that a self-replication mechanism for artificial intelligence is useful because it introduces the possibility of continual improvement through natural selection.
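
To make the self-replication objective concrete, below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' exact architecture: a small network maps a fixed encoding of each weight's index to a prediction of that weight's value, the quine loss is the squared error between predicted and actual weights, and a regeneration step overwrites the weights with the network's own predictions. The layer sizes, the random-projection index encoding, and the every-10-steps regeneration schedule are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small MLP that maps a 64-d encoding of a weight's index to a scalar
# prediction of that weight's value (sizes are illustrative).
net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

def flat_params(model):
    """All trainable parameters flattened into one vector (the 'self' to replicate)."""
    return torch.cat([p.reshape(-1) for p in model.parameters()])

n_weights = flat_params(net).numel()

# Fixed random projection encoding each weight index as a 64-d input vector;
# this stands in for the paper's coordinate embedding (an assumption).
index_embedding = torch.randn(n_weights, 64)

def self_replication_loss(model):
    theta = flat_params(model)                      # the network's actual weights
    predicted = model(index_embedding).squeeze(-1)  # its predictions of those weights
    return ((predicted - theta) ** 2).sum()

@torch.no_grad()
def regenerate(model):
    """Regeneration: overwrite each weight with the network's prediction of it."""
    predicted = model(index_embedding).squeeze(-1)
    offset = 0
    for p in model.parameters():
        n = p.numel()
        p.copy_(predicted[offset:offset + n].reshape(p.shape))
        offset += n

# Alternate ordinary gradient steps on the quine loss with regeneration steps.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = self_replication_loss(net)
    loss.backward()
    opt.step()
    if step % 10 == 0:
        regenerate(net)

print(f"final self-replication loss: {loss.item():.4f}")
```

The auxiliary-task variant described in the abstract would add a second term (e.g. an MNIST cross-entropy loss) to this objective, which is where the reported trade-off between classification accuracy and replication fidelity arises.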

Related research

- Evolutionary Self-Replication as a Mechanism for Producing Artificial Intelligence (09/16/2021): Can reproduction alone in the context of survival produce intelligence i...
- Self-Replicating Neural Programs (09/27/2021): In this work, a neural network is trained to replicate the code that tra...
- Improving Neural Network Classifier using Gradient-based Floating Centroid Method (07/21/2019): Floating centroid method (FCM) offers an efficient way to solve a fixed-...
- Generalization Error Analysis of Neural Networks with Gradient Based Regularization (07/06/2021): We study gradient-based regularization methods for neural networks. We m...
- A Self-Replication Basis for Designing Complex Agents (05/18/2018): In this work, we describe a self-replication-based mechanism for designi...
- Initialization Using Perlin Noise for Training Networks with a Limited Amount of Data (01/19/2021): We propose a novel network initialization method using Perlin noise for ...
- Training Neural Networks using SAT solvers (06/10/2022): We propose an algorithm to explore the global optimization method, using...
