Self-Delimiting Neural Networks

09/29/2012
by Juergen Schmidhuber, et al.

Self-delimiting (SLIM) programs are a central concept of theoretical computer science, particularly algorithmic information & probability theory, and asymptotically optimal program search (AOPS). To apply AOPS to (possibly recurrent) neural networks (NNs), I introduce SLIM NNs. Neurons of a typical SLIM NN have threshold activation functions. During a computational episode, activations spread from input neurons through the SLIM NN until the computation activates a special halt neuron. Weights of the NN's used connections define its program. Halting programs form a prefix code. Resetting the NN to its initial state does not cost more than the latest program execution. Since prefixes of SLIM programs influence their suffixes (weight changes occurring early in an episode influence which weights are considered later), SLIM NN learning algorithms (LAs) should execute weight changes online during activation spreading. This can be achieved by applying AOPS to growing SLIM NNs. To efficiently teach a SLIM NN to solve many tasks, such as correctly classifying many different patterns, or solving many different robot control tasks, each connection keeps a list of tasks it is used for. The lists may be efficiently updated during training. To evaluate the overall effect of currently tested weight changes, a SLIM NN LA needs to re-test performance only on the efficiently computable union of tasks potentially affected by the current weight changes. Future SLIM NNs will be implemented on 3-dimensional brain-like multi-processor hardware. Their LAs will minimize task-specific total wire length of used connections, to encourage efficient solutions of subtasks by subsets of neurons that are physically close. The novel class of SLIM NN LAs is currently being probed in ongoing experiments to be reported in separate papers.




1 Traditional NNs / Motivation of SLIM NNs / Outline

Recurrent neural networks (RNNs) are neural networks (NNs) [2] with feedback connections. RNNs are, in principle, as powerful as any traditional computer. There is a trivial way of seeing this [51]: a traditional microprocessor can be modeled as a very sparsely connected RNN consisting of simple neurons implementing nonlinear AND and NAND gates. Compare [69] for a more complex argument. RNNs can learn to solve many tasks involving sequences of continually varying inputs. Examples include robot control, speech recognition, music composition, attentive vision, and numerous others. Section 1.1 will give a brief overview of recent NNs and RNNs that achieved extraordinary success in many applications and competitions.
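To illustrate the gate argument above, a single threshold neuron suffices to implement a NAND gate; the specific weights and bias below are my own choice for illustration, not from the paper:

```python
# Sketch (not from the paper): NAND via a single threshold neuron,
# illustrating why a sparsely connected RNN of such units can emulate
# the logic gates of a traditional microprocessor.

def threshold_neuron(inputs, weights, bias):
    """Fires (returns 1) iff the weighted sum plus bias exceeds 0."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def nand(a, b):
    # weights -1, -1 and bias 1.5: active unless both inputs are 1
    return threshold_neuron([a, b], [-1.0, -1.0], 1.5)

assert [nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 1, 1, 0]
```

Since NAND is functionally complete, wiring enough such units together (with feedback for memory) yields the universality claim.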

Although RNNs are general computers whose programs are weight matrices, asymptotically optimal program search (AOPS) [41, 66, 58, 60] has not yet been applied to RNNs. Instead, most RNN learning algorithms are based on more or less heuristic search techniques such as gradient descent or evolution (see Section 1.1). One reason for the current lack of AOPS-based RNNs may be that traditional AOPS variants are designed to search a space of sequential self-delimiting (SLIM) programs [42, 3] (Section 1.2). The concept of partially parallel SLIM NNs will help to adapt AOPS to RNNs.

Section 1.3 will mention additional problems addressed by SLIM NNs: (1) Traditional NN implementations based on matrix multiplications may be inefficient for large NNs where most weights are rarely used (Section 1.3.1). (2) Traditional NNs use ad hoc ways of avoiding overfitting (Section 1.3.2). (3) Traditional RNNs are not well-suited as increasingly general problem solvers to be trained from scratch by PowerPlay [62], which continually invents the easiest-to-add novel computational problem by itself (Section 1.3.3).

Section 2 will describe essential properties of SLIM NNs; Section 3.2 will show how to apply incremental AOPS to SLIM RNNs.

1.1 Brief Intro to Successful RNNs and Related Deep NNs [61]

Supervised RNNs can be trained to map sequences of input patterns to desired output sequences by gradient descent and other methods [78, 81, 47, 52, 44, 31, 75]. Early RNNs had problems with learning to store relevant events in short-term memory across long time lags [26]. Long Short-Term Memory (LSTM) overcame these problems, outperforming early RNNs in many applications [27, 14, 15, 65, 20, 21, 23, 22]. While RNNs used to be toy problem methods in the 1990s, they have recently started to beat all other methods in challenging real-world applications [63, 65, 11, 21, 22, 23]. Recently, CTC-trained [20] multidimensional [23] RNNs won three Connected Handwriting Recognition Competitions at ICDAR 2009 (see below).

Training an RNN by standard methods is as difficult as training a deep feedforward NN (FNN) with many layers [26]. However, recent deep FNNs with special internal architecture overcome these problems to the extent that they are currently winning many international visual pattern recognition contests [63, 5, 7, 8, 10, 9] (see below). None of this requires the traditional sophisticated computer vision techniques developed over the past six decades or so. Instead, those biologically rather plausible NN architectures learn from experience with millions of training examples. Typically they have many non-linear processing stages like Fukushima's Neocognitron [13]; they sometimes (but not always) profit from sparse network connectivity and techniques such as weight sharing & convolution [38, 1], max-pooling [49], and contrast enhancement like the one automatically generated by unsupervised Predictability Minimization [53, 64, 67]. NNs are now often outperforming all other methods, including the theoretically less general and less powerful support vector machines (SVMs) based on statistical learning theory [77] (which for a long time had the upper hand, at least in practice). These results are currently contributing to a second Neural Network ReNNaissance (the first one happened in the 1980s and early 90s), which might not be possible without dramatic advances in computational power per Swiss Franc obtained in the new millennium. In particular, to implement and train NNs, we exploit graphics processing units (GPUs). GPUs are mini-supercomputers normally used for video games, often 100 times faster than traditional CPUs, and a million times faster than the PCs of two decades ago when we started this type of research.

Since 2009, my group's NN and RNN methods have achieved many first ranks in international competitions: (7) ISBI 2012 Electron Microscopy Stack Segmentation Challenge (with superhuman pixel error rate) [4]. (6) IJCNN 2011 on-site Traffic Sign Recognition Competition (0.56% error rate, the only method better than humans, who achieved 1.16% on average; 3rd place for 1.69%) [9]. (5) ICDAR 2011 offline Chinese handwritten character recognition competition [10]. (4) Online German Traffic Sign Recognition Contest (1st & 2nd rank; 1.02% error rate) [8]. (3) ICDAR 2009 Arabic Connected Handwriting Competition (won by LSTM RNNs [22, 23], same below). (2) ICDAR 2009 Handwritten Farsi/Arabic Character Recognition Competition. (1) ICDAR 2009 French Connected Handwriting Competition. Additional 1st ranks were achieved in important machine learning (ML) benchmarks since 2010: (A) MNIST handwritten digits data set [38] (perhaps the most famous ML benchmark). New records: 0.35% error in 2010 [5], 0.27% in 2011 [6], first human-competitive performance (0.23%) in 2012 [10]. (B) NORB stereo image data set [39]. New records in 2011, 2012, e.g., [10]. (C) CIFAR-10 image data set [37]. New records in 2011, 2012, e.g., [10].

Reinforcement Learning (RL) [32, 76] is more challenging than supervised learning as above, since there is no teacher providing desired outputs at appropriate time steps. To solve a given problem, the learning agent itself must discover useful output sequences in response to the observations. The traditional approach to RL [76] makes strong assumptions about the environment, such as the Markov assumption: the current input of the agent tells it all it needs to know about the environment. Then all we need to learn is some sort of reactive mapping from stationary inputs to outputs. This is often unrealistic. A more general approach for partially observable environments directly evolves programs for RNNs with internal states (no need for the Markovian assumption), by applying evolutionary algorithms [45, 68, 28] to RNN weight matrices [82, 70, 72, 25]. Recent work brought progress through a focus on reducing search spaces by co-evolving the comparatively small weight vectors of individual neurons and synapses [19], by Natural Gradient-based Stochastic Search Strategies [80, 73, 74, 48, 17, 79], and by reducing search spaces through weight matrix compression [55, 35]. RL RNNs now outperform many previous methods on benchmarks [19], creating memories of important events and solving numerous tasks unsolvable by classical RL methods.

1.2 Principles of Traditional Sequential SLIM Programs

The RNNs of Section 1.1 are not designed for AOPS. Traditional AOPS favors short and fast programs written in a universal programming language that permits self-delimiting (SLIM) programs [42, 3] studied in the theory of Kolmogorov complexity and algorithmic probability [71, 34, 54, 43, 55, 56, 57, 30]. In fact, SLIM programs are essential for making the theory of algorithmic probability elegant, e.g., [43].

The nice thing about SLIM programs is that they determine their own size during runtime. Traditional sequential SLIM programs work as follows: whenever the instruction pointer of a Turing Machine or a traditional PC has been initialized or changed (e.g., through a conditional jump instruction) such that its new value points to an address containing some executable instruction, then the instruction will be executed. This may change the internal storage, including the instruction pointer. Once a halt instruction is encountered and executed, the program stops.

Whenever the instruction pointer points to an address that never was used before by the current program and does not yet contain an instruction, this is interpreted as the online request for a new instruction [42, 3] (typically selected by a time-optimal search algorithm [66, 58, 60]). The new instruction is appended to the growing list of used instructions defining the program so far.

Executed program beginnings or prefixes influence their possible suffixes. Code execution determines code size in an online fashion.

Prefixes that halt or at least cease to request any further input instructions are called self-delimiting programs or simply programs. This procedure yields prefix codes on program space. No halting or non-halting program can be the prefix of another one.
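The online instruction-request mechanism above can be sketched with a toy interpreter; the instruction set, names, and single-accumulator storage model below are hypothetical, purely for illustration:

```python
# Hypothetical toy interpreter illustrating self-delimiting (SLIM) programs:
# an instruction is only requested (here: drawn from a candidate stream) when
# the instruction pointer first reaches a fresh address. Executed prefixes thus
# determine program size online, and halting programs form a prefix code.

def run_slim(instruction_source, max_steps=100):
    """instruction_source yields instructions on demand; returns the
    self-delimiting program (the instructions actually requested) and a result."""
    tape = []   # growing list of used instructions = the program so far
    ip = 0      # instruction pointer
    acc = 0     # a single accumulator as toy internal storage
    for _ in range(max_steps):
        if ip == len(tape):                # fresh address: online request
            tape.append(next(instruction_source))
        op = tape[ip]
        if op == "halt":
            return tape, acc               # program stops; tape is one prefix-code word
        elif op == "inc":
            acc += 1
            ip += 1
        elif op == "jump0":                # conditional jump back to start if acc == 0
            ip = 0 if acc == 0 else ip + 1
    return tape, acc                       # did not halt within the step budget

prog, result = run_slim(iter(["inc", "inc", "halt", "inc"]))
# only the first three instructions are ever requested:
assert prog == ["inc", "inc", "halt"] and result == 2
```

Note that once `halt` is reached, no further instructions are requested, so no halting program can be a proper prefix of another.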

Principles of SLIM programs are not implemented by traditional standard RNNs. SLIM RNNs, however, do implement them, making SLIM RNNs highly compatible with time-optimal program search (Section 3.2).

1.3 Additional Problems of Traditional NNs Addressed by SLIM NNs

1.3.1 Certain Inefficiencies of Traditional NN Implementations

Typical matrix multiplication-based implementations of the NN algorithms in Section 1.1 always take into consideration all neurons and connections of a given NN, even when most are not even needed to solve a particular task.

The SLIM NNs of the present paper use more efficient ways of information processing and learning. Imagine a large RNN with a trillion connections connecting a billion neurons, each with a thousand outgoing connections to other neurons. If the RNN consists of biologically plausible winner-take-all (WITA) neurons with threshold activation functions [50], also found in networks of spiking neurons [16], a given RNN computation might activate just a tiny fraction of all neurons, and hence never even consider the outgoing connections of most neurons. This simple fact can be exploited to devise classes of NN algorithms that are less costly in various senses, to be detailed below.

1.3.2 Traditional Ad Hoc Ways of Avoiding Overfitting

To avoid overfitting on training sets and to improve generalization on test sets, various pre-wired regularizer terms [2] have been added to performance measures or objective functions of traditional NNs. The idea is to obtain simple NNs by penalizing NN complexity. One problem is the ad hoc weighting of such additional terms. The present paper's more principled SLIM NNs can learn to actively select in task-specific ways their own size, that is, their effective number of weights (modifiable free parameters), in line with the theory of algorithmic probability and optimal universal inductive inference [71, 34, 54, 43, 55, 56, 57, 30].

1.3.3 RNNs as Problem Solvers for PowerPlay

The recent unsupervised PowerPlay framework [62] trains an increasingly general problem solver from scratch, continually inventing the easiest-to-add novel computational problem by itself. We will see that unlike traditional RNNs, SLIM RNNs are well-suited as problem solvers to be trained by PowerPlay. In particular, SLIM RNNs support a natural modularization of the space of self-invented and other tasks and their solutions into more or less independent regions. More on this particular motivation of SLIM NNs can be found in Section 4.

2 Self-Delimiting Parallel-Sequential Programs on SLIM RNNs

Unless stated, or otherwise obvious, to simplify notation, throughout the paper newly introduced variables are assumed to be integer-valued and to cover the range implicit in the context. N denotes the natural numbers, R the real numbers; constants such as costs are positive reals, and counting variables range over the non-negative integers.

The k-th computational unit or neuron of our RNN is denoted u_k (k = 1, ..., n). w_kl is the real-valued weight on the directed connection c_kl from u_k to u_l. Like the human brain, the RNN may be sparsely connected, that is, each neuron may be connected to just a fraction of the other neurons. To program the RNN means to set some or all of the weights w_kl.

At discrete time step t = 1, 2, ..., t_end of a finite interaction sequence with the environment (an episode), x_k(t) denotes the real-valued activation of u_k. The real-valued input vector in(t) (which may include a unique encoding of the current task) has n_in components, where the i-th component is denoted in_i(t); we define x_i(t) = in_i(t) for i = 1, ..., n_in. That is, the first n_in neurons are input neurons; they do not have incoming connections from other neurons. The current reward signal r(t) (if any) is a special real-valued input; we set in_{n_in}(t) = r(t). For k = n_in + 1, ..., n_in + n_out, we set out_{k - n_in}(t) = x_k(t), thus defining the n_out-dimensional output vector out(t), which may affect the environment (e.g., by defining a robot action) and thus future in(t) and r(t). For t = 1 we initialize x_k(t) = 0, and for t > 1 we compute x_k(t) = f_k(sum_l w_lk x_l(t-1)) (if u_k is user-defined as an additive neuron) or x_k(t) = f_k(prod_l w_lk x_l(t-1)) (if u_k is a multiplicative neuron).

Here the function f_k maps R to R. Many previous RNNs use differentiable activation functions such as f_k(x) = 1/(1 + e^{-x}) or f_k(x) = tanh(x). We want SLIM NN programs that can easily define their own size. Hence we focus on threshold activation functions that allow for keeping most units inactive most of the time, e.g.: f_k(x) = 1 if x > 0.5 and 0 otherwise. For the same reason we also consider winner-take-all activation functions. Here all non-input neurons (including output neurons) are partitioned into ordered winner-take-all subsets (WITAS), like in my first RNN from 1989 [50]. Once all x_k(t) of a WITAS are computed as above, and at least one of them exceeds a threshold such as 0.5, and a particular x_k(t) is the first with maximal activation in its WITAS, then we re-define x_k(t) as 1, otherwise as 0.
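A minimal sketch of such activation functions (function names and list layout are mine; the paper does not prescribe an implementation):

```python
# Sketch: threshold activation and a winner-take-all subset (WITAS) pass
# that keeps most units inactive.

def threshold(x, theta=0.5):
    """Simple threshold activation: 1 iff x exceeds theta."""
    return 1 if x > theta else 0

def witas(raw_activations, theta=0.5):
    """Given raw activations of one ordered WITAS, return binary activations:
    the first unit with maximal activation wins, provided at least one
    activation exceeds the threshold; all others are set to 0."""
    if max(raw_activations) <= theta:
        return [0] * len(raw_activations)
    winner = raw_activations.index(max(raw_activations))  # first maximal unit
    return [1 if i == winner else 0 for i in range(len(raw_activations))]

assert witas([0.2, 0.9, 0.9]) == [0, 1, 0]   # first maximal unit wins
assert witas([0.1, 0.3, 0.4]) == [0, 0, 0]   # nothing exceeds the threshold
```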

For each c_kl there is a constant cost cost_kl of using c_kl between t and t + 1, provided x_k(t) is non-zero. More on this in Section 3.4.

A special, unusual, non-traditional non-input neuron is called the halt neuron u_halt. If u_halt is active (has non-zero activation) once all updates of time t have been completed, we define t_end = t, and the computation stops. For non-halting programs, a maximal time limit T may be defined by a learning algorithm based on techniques of asymptotically optimal program search [41, 66, 58, 60] (see Section 3.2).

2.1 Efficient Activation Spreading and NN Resets

Procedure 2.1: Spread

  (see text for global variables and their initialization before the first call of Spread)
  set t := 1
  while x_halt ≤ threshold do
     get next input vector in(t)
     for k = 1, ..., n_in do
        x_k := in_k(t); if x_k is non-zero then append u_k to old
     end for
     for all u_k in old do
        for all connections c_kl emanating from u_k do
           (*) [If c_kl was never used before in the current or any previous episode, a learning algorithm (see Section 3) may set w_kl for the first time, thus growing the effectively used SLIM RNN by c_kl (and by u_l in case u_l was never used before)]
           if x_k is non-zero then
              if mark_kl = 0 then set mark_kl := 1 and append c_kl to trace
              if u_l is additive then net_l := net_l + w_kl x_k
              else u_l is multiplicative and net_l := net_l * w_kl x_k
              if used_l = 0 then set used_l := 1 and append u_l to new
              cost := cost + cost_kl [long wires may cost more; see Section 3.4]
              (**) if cost > T then exit while loop
           end if
        end for
     end for
     for all u_l in new do
        determine final new activation x_l (either 1 or 0) through thresholding and determination of WITAS winners (if any; see Section 2)
        used_l := 0; if u_l is additive then net_l := 0 else net_l := 1 [restore]
     end for
     old := new; new := empty list [now old cannot contain any input units]
     delete from old all u_l with zero x_l
     execute environment-changing actions (if any) based on output neurons; possibly update problem-specific variables needed for an ongoing performance evaluation according to a given problem-specific objective function [2] (see Section 3 on learning); continually add the computational costs of the above to cost; once cost > T exit while loop
     set t := t + 1
  end while
  for all u_l in new [perhaps non-empty in case of premature exit from (**)] do
     used_l := 0; if u_l is additive then net_l := 0 else net_l := 1 [restore]
  end for
  for all c_kl in trace do
     mark_kl := 0; x_k := 0; x_l := 0 [reset the used parts of the net only]
  end for

Procedure Spread (inspired by an earlier RNN implementation [50]) efficiently implements episodes according to the formulae above (see Procedure 2.1). Each u_k is associated with a list of all connections emanating from u_k. A nearly trivial observation is that only neurons with non-zero activation need to be considered for activation spreading. There are three global variable lists (initially empty): old, new, trace. Lists old and new track neurons used in the most recent two time steps, to efficiently proceed from one time step to the next; trace tracks connections used at least once during the current episode. For each u_l there is a global Boolean variable used_l (initially 0), to mark which RNN neurons already received contributions from the previous step during the current interaction sequence with the environment. For each c_kl there is a global Boolean variable mark_kl (initially 0), to mark which connections were used at least once. The following real-valued variables are initialized by 0 unless indicated otherwise: x_k holds the activation of u_k at the current step; net_k is a temporary variable for collecting contributions from neurons connected to u_k (initialized by 1 if u_k is a multiplicative neuron); in_k holds the current input of u_k if u_k is an input neuron. The integer variable cost (initially 0) is used to count connection usages; the given time limit T will eventually stop episodes that are not halted by the halting unit. The label (*) in Spread will be referred to in Section 3 on learning. Spread's results include two global variables: the program trace and its runtime cost.

Once Spread has finished, the weights of the connections used during the episode are the only used instructions of the SLIM program that just ran on the RNN. We observe: tracking and undoing the effects of a program essentially does not cost more than its execution, because untouched parts of the net are never considered for resets.

Note the difference to most standard NN implementations: the latter use matrix multiplications to multiply entire weight matrices by activation vectors. The simple list-based method Spread, however, ignores all unused neurons and irrelevant connections. In large brain-like sparse nets this by itself may dramatically accelerate information processing.
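The list-based idea can be sketched as follows, under simplifying assumptions of my own (additive neurons only, a single time step, dictionary-based adjacency lists):

```python
# Minimal sketch of the list-based idea behind Spread (a simplification,
# not the paper's procedure): only currently active neurons propagate, so
# unused connections are never touched. Weights are per-neuron adjacency lists.

def spread_step(active, weights, theta=0.5):
    """One time step. `active` maps neuron id -> activation (nonzero only);
    `weights` maps neuron id -> list of (target, weight) pairs. Returns the
    new set of active neurons after thresholding additive contributions."""
    net = {}
    for k, x in active.items():              # only active neurons are visited
        for l, w in weights.get(k, []):
            net[l] = net.get(l, 0.0) + w * x
    return {l: 1 for l, s in net.items() if s > theta}

weights = {0: [(2, 1.0)], 1: [(2, -1.0), (3, 1.0)], 4: [(5, 1.0)]}
# neuron 4's outgoing connections are never inspected because 4 is inactive:
assert spread_step({0: 1, 1: 1}, weights) == {3: 1}
```

In a sparse net with a billion neurons of which only a handful are active, this visits a handful of adjacency lists instead of multiplying a trillion-entry weight matrix.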

2.2 Relation to Traditional Self-Delimiting Programs and Prefix Codes

Since the order of neuron activation updates between two successive time steps is irrelevant, such updates can be performed in parallel. That is, SLIM NN code can be executed in partly parallel and partly sequential fashion. Nevertheless, the execution follows basic principles of sequential SLIM programs [42, 3, 43, 56, 57, 30] (Section 1.2).

As mentioned in Section 1.2, the latter form a prefix code on program space. An equivalent condition holds for the programs computed by Spread in a resettable deterministic environment, as long as we identify a given program with all possible variants (an equivalence class) reflecting the irrelevant order of neuron activation updates between two successive time steps. (In non-resettable environments, the environmental inputs have to be viewed as an additional part of the program to establish such a prefix code condition.) Compare also Section 3.1 on learning-based NN growth and the label (*) in Spread.

3 Principles of Efficient Learning Algorithms (LAs) for SLIM NNs

Through weight changes, the NN is supposed to learn something from a sequence of training task descriptions. Here each unique task description could identify a pattern classification task or robot control task; the task description dimensionality is an integer constant, such that (parts of) the description can be used as a non-changing part of the NN's inputs. The SLIM NN's performance on each task is measured by some given problem-specific objective function [2].

To efficiently change SLIM NN weights based on experience, any learning algorithm (LA) should ignore all unused weights.

Since prefixes of SLIM programs influence their suffixes, and weights used early in a Spread episode influence which weights are considered later, weight modifications tested by SLIM NN LAs should be generated online during program execution, such that unused weights are not even considered as candidates for change. Search spaces of many well-known LAs (such as hill climbing and neuro-evolution; see Section 1.1) obviously may be greatly reduced by obeying these restrictions.

3.1 LA-Based SLIM NN Growth

Typical SLIM NN LAs (e.g., Section 3.2) will influence how SLIM NNs grow. Consider the bracketed statement in procedure Spread labeled by (*). If some connection considered here was never used before, and its weight was never defined, a tentative non-zero value can be temporarily set here (setting it to zero wouldn't have any effect), and the used part of the net effectively grows by that connection (and by its target neuron, in case that neuron also was never used before). Later performance evaluations may suggest making this extended topology permanent and keeping the new weight as a basis for further changes.

This type of SLIM program-directed NN growth is quite different from previous popular NN growth strategies, e.g., [12].

3.2 (Incremental Adaptive) Universal Search for SLIM NNs

LAs for growing SLIM NNs as in Section 3.1 may be based on techniques of asymptotically optimal program search [41, 66, 58, 60]. Assume some initial bias in the form of probability distributions P_kl on a finite set W of possible real-valued values for each weight w_kl. Let n_kl denote the number of usages of connection c_kl during Spread. Given some task, one of the simplest LAs based on universal search [41] is the algorithm Universal SLIM NN Search.

Universal SLIM NN Search (Variant 1)

  for i = 1, 2, ... do
     systematically enumerate and test possible programs p (as computed by Spread) whose cost plus cost_other does not exceed P(p) 2^i, until all such p have been tested, or the most recently tested p has solved the task and the solution has been verified; in the latter case exit and return that p
  end for

Here the real-valued expression cost_other represents all costs other than those of the NN's connection usages. This includes the costs of output actions and evaluations. cost_other may be negligible in many applications though. The left-hand side of the inequality in Universal SLIM NN Search is essentially the cost computed by Spread.

That is, Universal SLIM NN Search time-shares all program tests such that each program gets not more than a constant fraction of the total search time, proportional to its probability. The method is near-bias-optimal [60] and asymptotically optimal in the following sense: if some unknown program p requires at most f(n) steps to solve a problem of a given class and integer size n and to verify the solution, where f is a computable function mapping integers to integers, then the entire search will also need at most O(f(n)) steps.
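The time-sharing scheme can be illustrated with a toy stand-in for program execution. The candidate "programs" and the simulator below are hypothetical; only the allocation rule (a budget proportional to P(p) * 2^i per phase) reflects the text:

```python
# Hedged sketch of the time-sharing idea behind Universal SLIM NN Search,
# in the style of Levin's universal search: in phase i, each candidate
# program p gets a time budget proportional to P(p) * 2**i.

def universal_search(candidates, run, solved, max_phase=20):
    """candidates: list of (program, probability); run(p, budget) executes p
    for at most `budget` steps and returns its output or None."""
    for i in range(1, max_phase + 1):
        for p, prob in candidates:
            budget = int(prob * 2 ** i)
            if budget < 1:
                continue                 # too improbable for this phase
            out = run(p, budget)
            if out is not None and solved(out):
                return p
    return None

# toy setting: a "program" is (steps_needed, answer); it only yields its
# answer if granted at least steps_needed time steps
def run(p, budget):
    steps_needed, answer = p
    return answer if budget >= steps_needed else None

candidates = [((8, "wrong"), 0.5), ((16, "right"), 0.25)]
assert universal_search(candidates, run, lambda out: out == "right") == (16, "right")
```

Doubling a program's probability halves the phase index at which it first receives enough time, which mirrors the time-halving remark later in this section.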

To explore the space of possible programs and their computational effects, efficient implementations of Universal SLIM NN Search use depth-first search in program prefix space combined with stack-based backtracking for partial state resets, as in the online source code [59] of the Optimal Ordered Problem Solver OOPS [60].

Traditional NN LAs address overfitting by pre-wired regularizers and hyper-parameters [2] to penalize NN complexity. Universal SLIM NN Search, however, systematically tests programs that essentially select their own task-dependent size (their number of weights, i.e., modifiable free parameters). It favors SLIM NNs that combine short runtime and simplicity/low descriptive complexity. Note that small size or low description length is equivalent to high probability, since the negative binary logarithm of the probability of some SLIM NN's program is essentially the number of bits needed to encode the program by Huffman coding [29]. Hence the method has a built-in way of addressing overfitting and boosting generalization performance [54, 55] through a bias towards simple solutions in the sense of Occam's razor [71, 34, 43, 30].

The method can be extended [66, 60] such that it incrementally solves each problem in an ordered sequence of problems, continually organizing, managing, and reusing earlier acquired knowledge. For example, this can be done by updating the probability distributions P_kl based on success: once Universal SLIM NN Search has found a solution p to the present problem, some (possibly heuristic) strategy is used to shift the bias by increasing/decreasing the probabilities of the weights of the connections used by p before the next invocation of Universal SLIM NN Search on the next problem [66]. Roughly speaking, each doubling of p's probability halves the time needed by Universal SLIM NN Search to find p.

One of the simplest bias-shifting procedures is Adaptive Universal SLIM NN Search (Variant 1), based on earlier work on sequential programs [66]. It uses a constant learning rate lambda. After a successful episode with a halting program, for each connection c_kl with n_kl > 0, let pos_kl denote how often the successive activations x_k(t) and x_l(t+1) were both 1, and let neg_kl denote how often x_k(t) was 1 but x_l(t+1) was 0. Note that pos_kl + neg_kl = n_kl. Define Delta_kl = (pos_kl - neg_kl) / n_kl. The sign of Delta_kl indicates whether c_kl usually helped to trigger or suppress u_l. A Hebb-inspired learning rule uses Delta_kl to change P_kl in case of success.

Adaptive Universal SLIM NN Search (Variant 1)

  for i = 1, 2, ... do
     use Universal SLIM NN Search to solve the i-th problem by some solution program p
     for all c_kl satisfying n_kl > 0 do
        for all w in W do
           if w = w_kl then
              P_kl(w) := P_kl(w) + lambda Delta_kl (1 - P_kl(w)) [increase P_kl(w) if Delta_kl > 0]
           else
              P_kl(w) := P_kl(w) - lambda Delta_kl P_kl(w) [decrease P_kl(w) if Delta_kl > 0]
           end if
        end for
        for all w in W do
           normalize: P_kl(w) := P_kl(w) / Z_kl, where the constant Z_kl is chosen to ensure that the P_kl(w) sum to 1
        end for
     end for
  end for
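One plausible numeric reading of such a bias shift after a success (the update rule and names below are an illustrative reconstruction, not necessarily the exact formula of [66]):

```python
# Illustrative bias shift: raise the probability of the weight value actually
# used, in proportion to a learning rate and a Hebb-like usage statistic
# delta in [-1, 1], then renormalize so the distribution still sums to 1.

def shift_bias(P, used_value, delta, lam=0.1):
    """P maps candidate weight values to probabilities; returns an updated copy."""
    Q = dict(P)
    Q[used_value] += lam * delta * (1.0 - Q[used_value])  # push toward success
    Z = sum(Q.values())
    return {w: q / Z for w, q in Q.items()}               # normalize

P = {-1.0: 0.5, 1.0: 0.5}
Q = shift_bias(P, used_value=1.0, delta=1.0)
assert abs(sum(Q.values()) - 1.0) < 1e-12 and Q[1.0] > P[1.0]
```

A negative delta (the connection mostly suppressed its target while the episode still succeeded) would shift probability mass away from the used value instead.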

To reduce the search space, alternative (adaptive) Universal SLIM NN Search variants do not use independent distributions P_kl but a joint distribution P_l for each u_l, to correlate the various weights w_kl of the connections entering u_l. For example, the rule may be: exactly one of the connections entering u_l must have a weight of 1, all others must have -1. (This is inspired by biological brains, whose connections are mostly inhibitory.) The initial P_l may assign equal a priori probability to the possible weight vectors (as many as there are connections entering u_l).

Yet additional variants of adaptive universal search for low-complexity networks search in compressed network space [36, 35, 18]. Alternatively, apply the principles of the Optimal Ordered Problem Solver OOPS [60, 58] to SLIM NNs: If a new problem can be solved faster by writing a program that invokes previously found code than by solving the new problem from scratch, then OOPS will find this out.

3.3 Tracking which Connections Affect which Tasks

To efficiently exploit that possibly many weights are not used by many tasks, we keep track of which connections affect which tasks [62]. For each connection c_kl a variable list L_kl of tasks is introduced. Its initial value before learning is the empty list.

Let us now assume tentative changes of certain used weights are computed by an LA embedded within a Spread-like framework (Section 2.1); compare label (*) in Spread. That is, some of the used weights (but no unused weights!) are modified or generated through an LA, while it is becoming clear during the ongoing activation spreading computation which units and connections are used at all; compare Sections 3.1 and 3.2.

Now note that the union of the corresponding lists L_kl is the list of tasks on which the SLIM NN's performance may have changed through the weight modifications. All other tasks can be safely ignored: performance on those tasks remains unaffected. For the possibly affected tasks we use Spread to re-evaluate performance. If total performance on all of them has not improved through the tentative weight changes, the latter are undone. Otherwise we keep them, and all affected lists L_kl are updated as follows (using the connection traces computed by Spread): the new value of L_kl is obtained by appending those tasks whose current (possibly revised) solutions now need c_kl at least once during the solution-computing process, and deleting those whose current solutions do not use c_kl any more.

That is, if the most recent task does not require changes of many weights, and if the changed weights do not affect many previous tasks, then validation of learning progress through methods like those of Section 3.2 or similar [62] may be much more efficient than in traditional NN implementations.
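This bookkeeping can be sketched as follows (the data layout, here sets keyed by connection ids, is an illustrative choice of mine):

```python
# Sketch of the task-tracking idea of Section 3.3: each connection keeps the
# set of tasks whose solutions use it, so a tentative weight change need only
# be re-tested on the union of the task sets of the changed connections.

def affected_tasks(task_lists, changed_connections):
    """task_lists maps connection id -> set of task ids using that connection."""
    affected = set()
    for c in changed_connections:
        affected |= task_lists.get(c, set())
    return affected

task_lists = {"c01": {1, 2}, "c12": {2}, "c23": {3}}
# changing c01 and c12 can only affect tasks 1 and 2; task 3 needs no re-test
assert affected_tasks(task_lists, ["c01", "c12"]) == {1, 2}
```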

3.4 Additional LA Principles for SLIM NNs on Future 3D Hardware

Computers keep getting faster per cost. To continue this trend within the limitations of physics, future hardware architectures will feature 3-dimensional arrangements of numerous connected processors. To minimize wire length and communication costs [40], the processors should communicate through many low-cost short-range and few high-cost long-range connections, much like biological neurons. Given some task, to minimize energy consumption and cooling costs, no more processors or neurons than necessary to solve the task should become active, and those that communicate a lot with each other should typically be physically close.

All of this can be encouraged through LAs that punish excessive processing and communication costs of 3D SLIM NNs running on such hardware.

Consider the constant cost cost_kl of using connection c_kl in such a 3D SLIM RNN from one discrete time step to the next in Spread-like procedures. cost_kl may be viewed as the wire length of c_kl [40]. The expression sum over used c_kl of cost_kl n_kl can enter the objective function, e.g., as an additive term to be minimized by an LA like those mentioned in Section 3. Note, however, that such costs are automatically taken into account by the universal program search methods of Section 3.2.
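Assuming the cost of a connection corresponds to the Euclidean wire length between 3D neuron coordinates, the additive penalty term could be computed as follows (all names and numbers are illustrative):

```python
# Hedged sketch of the wire-length penalty of Section 3.4: with neurons placed
# at 3D coordinates, take the cost of a connection as the Euclidean distance
# between its endpoints, and sum cost * usage-count over the used connections.

import math

def wire_cost(positions, usage_counts):
    """positions: neuron id -> (x, y, z); usage_counts: (k, l) -> n_kl."""
    total = 0.0
    for (k, l), n in usage_counts.items():
        d = math.dist(positions[k], positions[l])  # wire length of c_kl
        total += d * n
    return total

positions = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 3, 4)}
usage = {(0, 1): 10, (0, 2): 1}   # short wire used often, long wire rarely
assert wire_cost(positions, usage) == 10 * 1.0 + 1 * 5.0
```

Minimizing this term favors solutions whose frequently communicating neurons sit physically close, as argued in the text.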

Like biological brains, typical 3D SLIM RNNs will have many more short wires than long ones. An automatic by-product of LAs as in Section 3.2 should be the learning of subtask solutions by subsets of neurons, most of which are physically close to each other. The resulting weight matrices may sometimes be reminiscent of self-organizing maps for pattern classification [33] and motor control [24, 46]. The underlying cause of such neighborhood-preserving weight matrices, however, will not be a traditional pre-wired neighborhood-enforcing learning rule [33, 46], but sheer efficiency per se.

4 Experiments

First experiments with SLIM NNs are currently being conducted within the recent PowerPlay framework [62]. PowerPlay is designed to learn a more and more general problem solver from scratch. The idea is to let a general computer (say, a SLIM RNN) solve more and more tasks from the infinite set of all computable tasks, without ever forgetting solutions to previously solved tasks. At a given time, which task should be posed next? Human teachers in general do not know which tasks are not yet solvable by the SLIM RNN through generalization, yet easy to learn, given what’s already known. That’s why PowerPlay continually invents the easiest-to-add new task by itself.

To do this, PowerPlay incrementally searches the space of possible pairs of (1) new tasks, and (2) SLIM RNN modifications. The search continues until the first pair is discovered for which (a) the current SLIM RNN cannot solve the new task, and (b) the new SLIM RNN provably solves all previously learned tasks plus the new one. Here the new task may actually be to achieve a wow-effect by simplifying, compressing, or speeding up previous solutions.

Given a SLIM RNN that can already solve a finite known set of previously learned tasks, a particular AOPS algorithm [62] (compare Section 3.2) can be used to find a new pair that provably has properties (a) and (b). Once such a pair is found, the cycle repeats itself. This results in a continually growing set of tasks solvable by an increasingly powerful solver. The continually increasing repertoire of self-invented problem-solving procedures can be exploited at any time to solve externally posed tasks.
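The PowerPlay cycle described above can be summarized as a simple loop. This is a highly simplified sketch: `search_pair` stands in for the AOPS-based pair search of [62] and is a hypothetical callback, not an implementation of it.

```python
def powerplay(solver, solved_tasks, search_pair, max_cycles=10):
    """Repeat: find a pair (new_task, new_solver) such that the current
    solver fails on new_task while new_solver provably solves all
    previously learned tasks plus the new one; then adopt the pair."""
    for _ in range(max_cycles):
        pair = search_pair(solver, solved_tasks)
        if pair is None:                  # search budget exhausted
            break
        new_task, new_solver = pair
        solver = new_solver               # keep the verified modification
        solved_tasks = solved_tasks + [new_task]
    return solver, solved_tasks
```

With a toy `search_pair` that simply proposes the next integer task, the loop grows the repertoire until the search returns nothing.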

How are tasks represented for the SLIM RNN? A unique task index is given as a constant RNN input in addition to the changing inputs from the environment manipulated by the RNN outputs. Once the halt unit gets activated and the computation ends, the activations of a special pre-defined set of internal neurons can be viewed as the result of the computation. Essentially arbitrary computable tasks can be represented in this way by the SLIM RNN.
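A single episode of this scheme can be sketched as follows. The sketch assumes binary threshold units, a clamped task-index input, a designated halt unit, and designated result units; the topology and all names are illustrative, not the paper’s Spread procedure.

```python
def spread(weights, thresholds, inputs, halt, result_units, max_steps=100):
    """Propagate activations until the halt unit fires; return the
    activations of the designated result units plus the set of used
    connections (those whose source unit was active at least once)."""
    n = len(thresholds)
    act = [0] * n
    used = set()
    for _ in range(max_steps):
        for i, v in inputs.items():       # clamp constant task-index inputs
            act[i] = v
        net = [0.0] * n
        for (i, j), w in weights.items():
            if act[i]:                    # active source: connection is used
                used.add((i, j))
                net[j] += w * act[i]
        act = [1 if net[k] >= thresholds[k] else 0 for k in range(n)]
        for i, v in inputs.items():
            act[i] = v
        if act[halt]:                     # halt neuron fired: episode ends
            return [act[r] for r in result_units], used
    return None, used                     # no halt within the step budget
```

Note that only the connections in `used` define the executed self-delimiting program; resetting the network afterwards costs no more than re-visiting that set.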

We can keep track of which tasks depend on each connection (Section 3.3). If the most recent task to be learned does not require changes in many weights, and if the changed weights do not affect many previous tasks, then validation may be very efficient. Now recall that PowerPlay prefers to invent tasks whose validity check requires little computational effort. This implicit incentive (to generate modifications that do not impact many previous tasks) leads to a natural decomposition of the space of tasks and their solutions into more or less independent regions. Thus, divide-and-conquer strategies are natural by-products of PowerPlay-trained SLIM NNs [62]. Experimental results will be reported in separate papers.

5 Conclusion

Typical recurrent self-delimiting (SLIM) neural networks (NNs) are general computers for running arbitrary self-delimiting parallel-sequential programs encoded in their weights. While certain types of SLIM NNs have been around for decades, e.g., [50], little attention has been given to certain fundamental benefits of their self-delimiting nature. During program execution, lists or stacks can be used to trace only those neurons and connections used at least once. This also allows for efficient resets of large NNs which may use only a small fraction of their weights per task. Efficient SLIM NN learning algorithms (LAs) track which weights are used for which tasks, to greatly speed up performance evaluations in response to limited weight changes. SLIM NNs are easily combined with techniques of asymptotically optimal program search. To address overfitting, instead of depending on pre-wired regularizers and hyper-parameters [2], SLIM NNs can in principle learn to select by themselves their own runtime and their own numbers of free parameters, becoming fast and slim when necessary. LAs may penalize the task-specific total length of connections used by SLIM NNs implemented on the 3-dimensional brain-like multi-processor hardware to be expected in the future. This should encourage SLIM NNs to solve many subtasks by subsets of neurons that are physically close. Ongoing experiments with SLIM RNNs will be reported separately.

6 Acknowledgments

Thanks to Bas Steunebrink and Sohrob Kazerounian for useful comments.

References

  • [1] S. Behnke. Hierarchical Neural Networks for Image Interpretation, volume 2766 of Lecture Notes in Computer Science. Springer, 2003.
  • [2] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
  • [3] G. J. Chaitin. A theory of program size formally identical to information theory. Journal of the ACM, 22:329–340, 1975.
  • [4] D. C. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber. Deep neural networks segment neuronal membranes in electron microscopy images. In Advances in Neural Information Processing Systems NIPS, 2012, in press.
  • [5] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Deep big simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207–3220, 2010.
  • [6] D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Convolutional neural network committees for handwritten character classification. In 11th International Conference on Document Analysis and Recognition (ICDAR), pages 1250–1254, 2011.
  • [7] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In Intl. Joint Conference on Artificial Intelligence IJCAI, pages 1237–1242, 2011.
  • [8] D. C. Ciresan, U. Meier, J. Masci, and J. Schmidhuber. A committee of neural networks for traffic sign classification. In International Joint Conference on Neural Networks, pages 1918–1921, 2011.
  • [9] D. C. Ciresan, U. Meier, J. Masci, and J. Schmidhuber. Multi-column deep neural network for traffic sign classification. Neural Networks, in press, 2012.
  • [10] D. C. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In IEEE Conference on Computer Vision and Pattern Recognition CVPR 2012, 2012, in press. Long preprint arXiv:1202.2745v1 [cs.CV].
  • [11] S. Fernandez, Alex Graves, and J. Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), 2007.
  • [12] B. Fritzke. A growing neural gas network learns topologies. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, NIPS, pages 625–632. MIT Press, 1994.
  • [13] K. Fukushima. Neocognitron: A self-organizing neural network for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980.
  • [14] F. A. Gers and J. Schmidhuber. LSTM recurrent networks learn simple context free and context sensitive languages. IEEE Transactions on Neural Networks, 12(6):1333–1340, 2001.
  • [15] F. A. Gers, N. Schraudolph, and J. Schmidhuber. Learning precise timing with LSTM recurrent networks. Journal of Machine Learning Research, 3:115–143, 2002.
  • [16] W. Gerstner and W. K. Kistler. Spiking Neuron Models. Cambridge University Press, 2002.
  • [17] T. Glasmachers, T. Schaul, Y. Sun, D. Wierstra, and J. Schmidhuber. Exponential Natural Evolution Strategies. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2010.
  • [18] F. J. Gomez, J. Koutník, and J. Schmidhuber. Compressed networks complexity search. In Springer, editor, Parallel Problem Solving from Nature (PPSN 2012), 2012.
  • [19] F. J. Gomez, J. Schmidhuber, and R. Miikkulainen. Efficient non-linear control through neuroevolution. Journal of Machine Learning Research JMLR, 9:937–965, 2008.
  • [20] A. Graves, S. Fernandez, F. J. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural nets. In ICML ’06: Proceedings of the International Conference on Machine Learning, 2006.
  • [21] A. Graves, S. Fernandez, M. Liwicki, H. Bunke, and J. Schmidhuber. Unconstrained on-line handwriting recognition with recurrent neural networks. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 577–584. MIT Press, Cambridge, MA, 2008.
  • [22] A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel connectionist system for improved unconstrained handwriting recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5), 2009.
  • [23] A. Graves and J. Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks. In Advances in Neural Information Processing Systems 21. MIT Press, Cambridge, MA, 2009.
  • [24] M. Graziano. The Intelligent Movement Machine: An Ethological Perspective on the Primate Motor System. 2009.
  • [25] N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2):159–195, 2001.
  • [26] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001.
  • [27] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
  • [28] J. H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, 1975.
  • [29] D. A. Huffman. A method for construction of minimum-redundancy codes. Proceedings IRE, 40:1098–1101, 1952.
  • [30] M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin, 2005. (On J. Schmidhuber’s SNF grant 20-61847).
  • [31] H. Jaeger. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304:78–80, 2004.
  • [32] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: a survey. Journal of AI research, 4:237–285, 1996.
  • [33] T. Kohonen. Self-Organization and Associative Memory. Springer, second edition, 1988.
  • [34] A. N. Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information Transmission, 1:1–11, 1965.
  • [35] J. Koutnik, F. Gomez, and J. Schmidhuber. Evolving neural networks in compressed weight space. In Proceedings of the Conference on Genetic and Evolutionary Computation (GECCO-10), 2010.
  • [36] J. Koutník, F. Gomez, and J. Schmidhuber. Searching for minimal neural networks in Fourier space. In Proceedings of the 4th Annual Conference on Artificial General Intelligence. Atlantis Press, 2010.
  • [37] A. Krizhevsky. Learning multiple layers of features from tiny images. Master’s thesis, Computer Science Department, University of Toronto, 2009.
  • [38] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
  • [39] Y. LeCun, F. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proc. of Computer Vision and Pattern Recognition Conference, 2004.
  • [40] R. A. Legenstein and W. Maass. Neural circuits for pattern recognition with small total wire length. Theor. Comput. Sci., 287(1):239–249, September 2002.
  • [41] L. A. Levin. Universal sequential search problems. Problems of Information Transmission, 9(3):265–266, 1973.
  • [42] L. A. Levin. Laws of information (nongrowth) and aspects of the foundation of probability theory. Problems of Information Transmission, 10(3):206–210, 1974.
  • [43] M. Li and P. M. B. Vitányi. An Introduction to Kolmogorov Complexity and its Applications (2nd edition). Springer, 1997.
  • [44] W. Maass, T. Natschläger, and H. Markram. A fresh look at real-time computation in generic recurrent neural circuits. Technical report, Institute for Theoretical Computer Science, TU Graz, 2002.
  • [45] I. Rechenberg. Evolutionsstrategie - Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Dissertation, 1971. Published 1973 by Fromman-Holzboog.
  • [46] M. Ring, T. Schaul, and J. Schmidhuber. The two-dimensional organization of behavior. In Proceedings of the First Joint Conference on Development Learning and on Epigenetic Robotics ICDL-EPIROB, Frankfurt, August 2011.
  • [47] A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.
  • [48] T. Schaul, J. Bayer, D. Wierstra, Yi Sun, M. Felder, F. Sehnke, T. Rückstieß, and J. Schmidhuber. PyBrain. Journal of Machine Learning Research, 11:743–746, 2010.
  • [49] D. Scherer, A. Müller, and S. Behnke. Evaluation of pooling operations in convolutional architectures for object recognition. In International Conference on Artificial Neural Networks, 2010.
  • [50] J. Schmidhuber. A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403–412, 1989.
  • [51] J. Schmidhuber. Dynamische neuronale Netze und das fundamentale raumzeitliche Lernproblem. Dissertation, Institut für Informatik, Technische Universität München, 1990.
  • [52] J. Schmidhuber. A fixed size storage time complexity learning algorithm for fully recurrent continually running networks. Neural Computation, 4(2):243–248, 1992.
  • [53] J. Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863–879, 1992.
  • [54] J. Schmidhuber. Discovering solutions with low Kolmogorov complexity and high generalization capability. In A. Prieditis and S. Russell, editors, Machine Learning: Proceedings of the Twelfth International Conference, pages 488–496. Morgan Kaufmann Publishers, San Francisco, CA, 1995.
  • [55] J. Schmidhuber. Discovering neural nets with low Kolmogorov complexity and high generalization capability. Neural Networks, 10(5):857–873, 1997.
  • [56] J. Schmidhuber. Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit. International Journal of Foundations of Computer Science, 13(4):587–612, 2002.
  • [57] J. Schmidhuber. The Speed Prior: a new simplicity measure yielding near-optimal computable predictions. In J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Lecture Notes in Artificial Intelligence, pages 216–228. Springer, Sydney, Australia, 2002.
  • [58] J. Schmidhuber. Bias-optimal incremental problem solving. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15 (NIPS 15), pages 1571–1578, Cambridge, MA, 2003. MIT Press.
  • [59] J. Schmidhuber. OOPS source code in crystalline format: http://www.idsia.ch/~juergen/oopscode.c, 2004.
  • [60] J. Schmidhuber. Optimal ordered problem solver. Machine Learning, 54:211–254, 2004.
  • [61] J. Schmidhuber. New millennium AI and the convergence of history. In W. Duch and J. Mandziuk, editors, Challenges to Computational Intelligence, volume 63, pages 15–36. Studies in Computational Intelligence, Springer, 2007. Also available as arXiv:cs.AI/0606081.
  • [62] J. Schmidhuber. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem. Technical Report arXiv:1112.5309v1 [cs.AI], 2011.
  • [63] J. Schmidhuber, D. Ciresan, U. Meier, J. Masci, and A. Graves. On fast deep nets for AGI vision. In Proc. Fourth Conference on Artificial General Intelligence (AGI), Google, Mountain View, CA, 2011.
  • [64] J. Schmidhuber, M. Eldracher, and B. Foltin. Semilinear predictability minimization produces well-known feature detectors. Neural Computation, 8(4):773–786, 1996.
  • [65] J. Schmidhuber, D. Wierstra, M. Gagliolo, and F. J. Gomez. Training recurrent networks by EVOLINO. Neural Computation, 19(3):757–779, 2007.
  • [66] J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105–130, 1997.
  • [67] N. N. Schraudolph, M. Eldracher, and J. Schmidhuber. Processing images by semi-linear predictability minimization. Network: Computation in Neural Systems, 10(2):133–169, 1999.
  • [68] H. P. Schwefel. Numerische Optimierung von Computer-Modellen. Dissertation, 1974. Published 1977 by Birkhäuser, Basel.
  • [69] H. T. Siegelmann and E. D. Sontag. Turing computability with neural nets. Applied Mathematics Letters, 4(6):77–80, 1991.
  • [70] K. Sims. Evolving virtual creatures. In A. Glassner, editor, Proceedings of SIGGRAPH ’94 (Orlando, Florida, July 1994), Computer Graphics Proceedings, Annual Conference, pages 15–22. ACM SIGGRAPH, ACM Press, July 1994. ISBN 0-89791-667-0.
  • [71] R. J. Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7:1–22, 1964.
  • [72] Kenneth O. Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10:99–127, 2002.
  • [73] Yi Sun, D. Wierstra, T. Schaul, and J. Schmidhuber. Efficient natural evolution strategies. In Genetic and Evolutionary Computation Conference, 2009.
  • [74] Yi Sun, D. Wierstra, T. Schaul, and J. Schmidhuber. Stochastic search using the natural gradient. In International Conference on Machine Learning (ICML), 2009.
  • [75] I. Sutskever, J. Martens, and G. Hinton. Generating text with recurrent neural networks. In L. Getoor and T. Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11), ICML ’11, pages 1017–1024, New York, NY, USA, June 2011. ACM.
  • [76] R. Sutton and A. Barto. Reinforcement learning: An introduction. Cambridge, MA, MIT Press, 1998.
  • [77] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
  • [78] P. J. Werbos. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1, 1988.
  • [79] D. Wierstra, A. Foerster, J. Peters, and J. Schmidhuber. Recurrent policy gradients. Logic Journal of IGPL, 18(2):620–634, 2010.
  • [80] D. Wierstra, T. Schaul, J. Peters, and J. Schmidhuber. Natural evolution strategies. In Congress of Evolutionary Computation (CEC 2008), 2008.
  • [81] R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum, 1994.
  • [82] X. Yao. A review of evolutionary artificial neural networks. International Journal of Intelligent Systems, 4:203–222, 1993.