The structure of evolved representations across different substrates for artificial intelligence

04/05/2018
by Arend Hintze, et al.

Artificial neural networks (ANNs), while exceptionally useful for classification, are vulnerable to misdirection. Small amounts of noise can significantly affect their ability to correctly complete a task. Instead of generalizing concepts, ANNs seem to focus on surface statistical regularities in a given task. Here we compare how recurrent artificial neural networks, long short-term memory units, and Markov Brains sense and remember their environments. We show that information in Markov Brains is localized and sparsely distributed, while the other neural network substrates "smear" information about the environment across all nodes, which makes them vulnerable to noise.
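The kind of comparison the abstract alludes to can be made concrete with a per-node information measure. The sketch below is only a minimal illustration, not the paper's actual analysis pipeline: the recorded node_states and env_feature arrays, the median-threshold binarization, and the plug-in discrete mutual-information estimator are all assumptions introduced here for the example.

```python
import numpy as np

def discrete_mutual_information(x, y):
    """Mutual information (in bits) between two 1-D arrays of discrete labels."""
    x_vals, x_idx = np.unique(x, return_inverse=True)
    y_vals, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(x_vals), len(y_vals)))
    for i, j in zip(x_idx, y_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal P(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal P(y)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Hypothetical recorded data: hidden-node activations over T time steps and the
# binary environmental feature the agent is supposed to track. Random values
# are used here as stand-ins, so the printed MI values will be near zero.
rng = np.random.default_rng(0)
T, n_nodes = 1000, 16
env_feature = rng.integers(0, 2, size=T)
node_states = rng.random((T, n_nodes))

# Binarize continuous activations at each node's median so the MI is discrete.
binary_states = (node_states > np.median(node_states, axis=0)).astype(int)

# Per-node information about the environment: a localized representation
# concentrates high MI in a few nodes; a "smeared" one spreads it thinly
# across all of them.
per_node_mi = [discrete_mutual_information(binary_states[:, k], env_feature)
               for k in range(n_nodes)]
print(np.round(per_node_mi, 3))
```

Comparing the shape of these per-node values across substrates (a few nodes with high mutual information versus many nodes with low mutual information) is one way to make the "localized versus smeared" distinction measurable.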

Related research

04/23/2021  A modularity comparison of Long Short-Term Memory and Morphognosis neural networks
This study compares the modularity performance of two artificial neural ...

04/21/2021  Aedes-AI: Neural Network Models of Mosquito Abundance
We present artificial neural networks as a feasible replacement for a me...

11/06/2017  Learning Solving Procedure for Artificial Neural Network
It is expected that progress toward true artificial intelligence will be...

09/17/2017  Markov Brains: A Technical Introduction
Markov Brains are a class of evolvable artificial neural networks (ANN)....

05/17/2022  Need is All You Need: Homeostatic Neural Networks Adapt to Concept Shift
In living organisms, homeostasis is the natural regulation of internal s...

03/05/2023  On Modifying a Neural Network's Perception
Artificial neural networks have proven to be extremely useful models tha...

05/30/2017  A Tale of Two Animats: What does it take to have goals?
What does it take for a system, biological or not, to have goals? Here, ...
