Controllability, Multiplexing, and Transfer Learning in Networks using Evolutionary Learning

11/14/2018
by Rise Ooi, et al.

Networks are fundamental building blocks for representing data and computations. Remarkable progress has recently been achieved in learning within structurally defined (shallow or deep) networks. Here we introduce an evolutionary exploratory search and learning method for topologically flexible networks, constrained to produce elementary steady-state input-output operations. Our results include: (1) the identification of networks, across four orders of magnitude in size, that compute steady-state input-output functions such as a band-pass filter, a threshold function, and an inverse band-pass function; (2) the learned networks are technically controllable, as only a small number of driver nodes is required to move the system to a new state, and the fraction of required driver nodes stays constant during evolutionary learning, suggesting a stable system design; (3) our framework allows multiplexing of different computations over the same network: using a binary representation of the inputs, for example, a single network can readily compute three different input-output functions; and (4) the proposed evolutionary learning exhibits transfer learning: once the system has learned a function A, learning a second function B requires on average fewer steps than learning B from tabula rasa. We conclude that constrained evolutionary learning produces large, robust, controllable circuits capable of multiplexing and transfer learning. Our study suggests that network-based computation of steady-state functions, representing either cellular modules of cell-to-cell communication networks or internal molecular circuits communicating within a cell, could be a powerful model for biologically inspired computing, complementing conceptualizations such as attractor-based models and reservoir computing.
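The abstract does not specify the paper's concrete dynamics or evolutionary operators, so the following is a minimal Python sketch of the general idea only: a (1+1) evolutionary hill-climb that mutates a sparse weighted network until its steady-state response to a clamped input approximates a band-pass target. The saturating tanh dynamics, network size, mutation rates, input/output node choices, and target discretization are all illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 10            # network size (kept tiny for illustration; the paper evolves far larger nets)
IN, OUT = 0, N - 1  # designated input and output nodes (an assumption)

def steady_state_output(W, u, steps=100, tol=1e-6):
    """Iterate saturating dynamics x <- tanh(Wx + input) to a fixed point; read the output node."""
    x = np.zeros(N)
    for _ in range(steps):
        drive = W @ x
        drive[IN] += u                    # clamp the input signal onto the input node
        x_new = np.tanh(drive)
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x[OUT]

# Target: a band-pass input-output function (high output only for mid-range inputs)
inputs = np.linspace(0.0, 2.0, 21)
target = ((inputs > 0.7) & (inputs < 1.3)).astype(float)

def fitness(W):
    """Negative mean squared error between the network's steady-state curve and the target."""
    out = np.array([steady_state_output(W, u) for u in inputs])
    return -np.mean((out - target) ** 2)

def mutate(W):
    """Perturb one random edge weight; occasionally delete it, allowing topology to change."""
    W = W.copy()
    i, j = rng.integers(N, size=2)
    if rng.random() < 0.1:
        W[i, j] = 0.0                     # remove the edge
    else:
        W[i, j] += rng.normal(0, 0.3)     # perturb (or create) the edge
    return W

# (1+1) evolutionary search: accept a mutant only if fitness does not decrease
W = rng.normal(0, 0.1, (N, N)) * (rng.random((N, N)) < 0.2)  # sparse random start
best = fitness(W)
for gen in range(2000):
    W2 = mutate(W)
    f2 = fitness(W2)
    if f2 >= best:
        W, best = W2, f2
print(f"final fitness (neg. MSE): {best:.4f}")
```

Under the same assumptions, the transfer-learning result (4) could be probed by rerunning the loop on a new target, e.g. a threshold function, seeded with the evolved W rather than a fresh random matrix, and comparing how many generations each run needs to reach a given fitness.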


