1 Introduction
To automatically construct an increasingly general problem solver, the recent PowerPlay framework [24] incrementally and efficiently searches the space of possible pairs of (1) new task descriptions (from the set of all computable task descriptions), and (2) modifications of the current problem solver. The search continues until the first pair is discovered for which (i) the current solver cannot solve the new task, and (ii) the modified solver provably solves all previously learned tasks plus the new one. Here a new task may actually be to simplify, compress, or speed up previous solutions, which in turn may invoke or partially reuse solutions to other tasks. The above process of discovering and solving a novel task can be repeated forever in open-ended fashion.
As a concrete implementation of the solver, we use a special neural network (NN) [2] architecture called the Self-Delimiting NN (SLIM NN) [25]. Given a SLIM NN that can already solve a finite known set of previously learned tasks, an asymptotically optimal program search algorithm [9, 26, 20, 21] can be used to find a new pair that provably has properties (i) and (ii). Once such a pair is found, the cycle repeats itself. This results in a continually growing set of tasks solvable by an increasingly more powerful solver. The resulting repertoire of self-invented problem-solving procedures or skills can be exploited at any time to solve externally posed tasks.
The SLIM NN has modifiable components, namely, its connection weights. By keeping track of which tasks depend on each connection, PowerPlay can reduce the time required for testing previously solved tasks with certain newly modified connection weights, because only tasks that depend on the changed connections need to be retested. If the solution of the most recently invented task does not require changes of many weights, and if the changed connections do not affect many previous tasks, then validation may be very efficient. Since PowerPlay's efficient search process has a built-in bias towards tasks whose validity check requires little computational effort, there is an implicit incentive to generate weight modifications that do not impact too many previous tasks. This leads to a natural decomposition of the space of tasks and their solutions into more or less independent regions. Thus, divide-and-conquer strategies are natural by-products of PowerPlay.
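The bookkeeping behind this retesting shortcut can be sketched in a few lines. The following is an illustrative sketch, not the authors' implementation; all names are assumptions:

```python
# Hypothetical sketch of PowerPlay's dependency bookkeeping: for each task we
# record which connections its solution uses, so that after a weight change
# only tasks touching a changed connection must be re-validated.

def tasks_to_revalidate(task_deps, changed_connections):
    """task_deps: dict task_id -> set of connection ids used by that task.
    Returns the subset of tasks whose solutions may be affected."""
    changed = set(changed_connections)
    return {t for t, conns in task_deps.items() if conns & changed}

# Example: three learned tasks using overlapping connection subsets.
deps = {1: {0, 1, 2}, 2: {2, 3}, 3: {4, 5}}
affected = tasks_to_revalidate(deps, [2])  # only tasks 1 and 2 use connection 2
```

If a new solution touches connections used by few tasks, the revalidation set stays small, which is exactly the implicit incentive described above.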
Note that active learning methods [5] such as AdaBoost [6] have a totally different setup and purpose: there the user provides a set of samples to be learned, then each new classifier in a series of classifiers focuses on samples badly classified by previous classifiers. In open-ended PowerPlay, however, all computational tasks (not necessarily classification tasks) can be self-invented; there is no need for a predefined global set of tasks that each new solver tries to solve better. Instead the task set continually grows based on which task is easy to invent and validate, given what is already known. Unlike our first implementations of curious / creative / playful agents from the 1990s [17, 29, 18] (cf. [1, 4, 13, 11]), PowerPlay provably (by design) does not have any problems with online learning: it cannot forget previously learned skills, automatically segmenting its life into a sequence of clearly identified tasks with explicitly recorded solutions. Unlike the task search of theoretically optimal creative agents [22, 23], PowerPlay's task search is greedy, yet practically feasible. Here we present first experiments, extending recent work [28].
2 Notation & Algorithmic Framework for PowerPlay (Variant II)
We use the notation of the original paper [24], and briefly review the basics relevant here. $B^*$ denotes the set of finite bit strings over the binary alphabet $B = \{0, 1\}$, $\mathbb{N}$ the natural numbers, $\mathbb{R}$ the real numbers. The computational architecture of PowerPlay's problem solver may be a deterministic universal computer, or a more limited device such as a feedforward NN. All problem solvers can be uniquely encoded [7] or implemented on universal computers such as universal Turing Machines (TMs) [31]. Therefore, without loss of generality, we can assume a fixed universal reference computer whose inputs and outputs are elements of $B^*$. User-defined subsets $\mathcal{S}, \mathcal{T} \subseteq B^*$ define the sets of possible problem solvers and task descriptions. For example, $\mathcal{T}$ may be the infinite set of all computable tasks, or a small subset thereof. $\mathcal{P} \subseteq B^*$ defines a set of possible programs which may be used to generate or modify members of $\mathcal{S}$ or $\mathcal{T}$. If our solver is a feedforward NN, then $\mathcal{S}$ could be a highly restricted subset of programs encoding the NN's possible topologies and weights, $\mathcal{T}$ could be encodings of input-output pairs for a supervised learning task, and $\mathcal{P}$ could be an algorithm that modifies the weights of the network.

The problem solver's initial program is called $s_0$. A particular sequence of unique task descriptions $T_1, T_2, \ldots$ (where each $T_i \in \mathcal{T}$) is chosen or "invented" by a search method (see below) such that the solutions of $T_1, \ldots, T_i$ can be computed by $s_i$, the $i$-th instance of the program, while $T_i$ cannot be solved by $s_{i-1}$. Each $T_i$ consists of a unique problem identifier that can be read by $s_i$ through some built-in mechanism (e.g., input neurons of an NN as in Sec. 3 and 4), and a unique description of a deterministic procedure for deciding whether the problem has been solved. For example, a simple task may require the solver to answer a particular input pattern with a particular output pattern. Or it may require the solver to steer a robot towards a goal through a sequence of actions. Denote $T_{\le i} = \{T_1, \ldots, T_i\}$; $T_{<i} = \{T_1, \ldots, T_{i-1}\}$. A valid task $T_i$ ($i > 1$) may require solving at least one previously solved task $T_k$ ($k < i$) more efficiently, by using less resources such as storage space, computation time, energy, etc., quantified by a cost function. The algorithmic framework (Alg. 1) incrementally trains the problem solver by finding programs $p_i \in \mathcal{P}$ that increase the set of solvable tasks. For more details, the reader is encouraged to refer to the original report [24].

3 Experiment 1: Self-Invented Pattern Recognition Tasks
We start with pattern classification tasks. In this setup, $s$ encodes an arbitrary set of weights for a fixed-topology multilayer perceptron (MLP). The MLP maps two-dimensional, real-valued input vectors from the unit square to binary labels; i.e., $s: [0,1]^2 \to \{0,1\}$. The output label is 0 or 1 depending on whether or not the real-valued activation of the MLP's single output neuron exceeds 0.5. Binary programs $p \in \mathcal{P}$ compute tasks and modify $s$ as follows. If $p_1$ (the first bit of $p$) is 0, this specifies that the current task is to simplify $s$ by weight decay, under the assumption that smaller weights are simpler. Such programs implement compression tasks. But if $p_1$ is 1, then the target label of the current task candidate is given by the next bit $p_2$, and the task's two-dimensional input vector is uniquely encoded by the remainder of $p$'s bit string as follows. The remaining string is taken as the binary representation of an integer $j$. Then a 2D Gaussian pseudo-random number generator is used to generate $j$ pairs of numbers, of which the $j$-th pair is used as 2D coordinates in the unit square. Now the task is to label these coordinates with the target label. The random number generator is re-seeded by the same seed every time a new task search begins, thus ensuring a deterministic search order. Since we only have two labels in this experiment, we do not actually need $p_2$: we can simply choose the target label to be different from the label currently assigned by the MLP to the encoded input. To run $p$ for $t$ steps (on the training set of patterns collected so far) means to execute the corresponding number of epochs of gradient descent on the training set and check whether the patterns are correctly classified. Here one step always refers to the processing of a single pattern (either a forward or backward pass), regardless of the task.
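The task-encoding scheme above can be sketched as follows. The seed value, the Gaussian parameters, and the clipping into the unit square are illustrative assumptions not fixed by the text:

```python
import random

def decode_task(bits, seed=42):
    """Hypothetical decoding of a binary program into a task, following the
    scheme in the text: first bit 0 -> compression task; otherwise the
    remaining bits give an integer j indexing a deterministic pseudo-random
    stream of 2D points."""
    if bits[0] == 0:
        return ('compress',)
    j = int(''.join(map(str, bits[1:])) or '0', 2)  # remainder as integer j
    rng = random.Random(seed)       # re-seeded identically for every search
    x = y = 0.5
    for _ in range(j + 1):          # take the j-th pair of the stream
        x, y = rng.gauss(0.5, 0.2), rng.gauss(0.5, 0.2)
    x = min(max(x, 0.0), 1.0)       # clip into the unit square (assumption)
    y = min(max(y, 0.0), 1.0)
    return ('classify', x, y)
```

Because the generator is re-seeded with the same seed for every search, the same bit string always yields the same task, giving the deterministic search order the text relies on.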
Assume now that PowerPlay has already learned a version of $s$ called $s_i$, able to classify all previously invented training patterns. Then the next task is defined by a simple enumerative search in the style of universal search [10, 26, 21], which combines task simplification and systematic runtime growth (see Alg. 2). Since the compression task code is the single bit '0', roughly half of the total search time is spent on simplification; the rest is spent on the invention of new training patterns that break the MLP's current generalization ability.
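The enumerative search in the style of universal search can be sketched as a sequence of phases in which each candidate program receives a time slice shrinking exponentially with its length; the function names and the success interface below are assumptions, not the paper's Alg. 2:

```python
def universal_search_phase(programs, run, total_budget):
    """One phase of a Levin-style universal search (sketch): each candidate
    binary program p gets a slice of the phase budget proportional to
    2**-len(p). In the full search, phases with doubled budgets would be
    repeated until some program succeeds."""
    for p in programs:  # programs enumerated in order of increasing length
        steps = int(total_budget * 2 ** -len(p))
        if steps > 0 and run(p, steps):  # run p for at most `steps` steps
            return p
    return None
```

Doubling the phase budget until a pair is found realizes the "systematic runtime growth" mentioned above, and the short code '0' for compression guarantees that roughly half the budget of every phase goes to simplification.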
To monitor the evolution of the solver's generalization map, after each successful search for a new task, the labels assigned to the points of a rather dense grid on the unit square are plotted (Fig. 1), to see how the MLP maps $[0,1]^2$ to $\{0,1\}$. As expected, the experiments show that in the beginning PowerPlay prefers to invent and learn simple linear functions. However, there is a phase transition to more complex nonlinear functions after a few tasks, indicating a new developmental stage [14, 19, 12]. This is a natural by-product of the search for simple tasks: they are easier to invent and verify than more complex nonlinear tasks. As learning proceeds, we observe that the decision boundary becomes increasingly nonlinear: the system must come up with tasks which the solver cannot yet solve, but the solver becomes increasingly more powerful, so the invented tasks become increasingly harder. On the other hand, the search time for solutions to harder and harder tasks need not grow over time, since new solutions do not have to be learnt from scratch, but may reuse previous solutions encoded as parts of the previous solver.

4 Experiment 2: Self-Invented Tasks Involving Motor Control and Internal Abstractions
4.1 Self-Delimiting (SLIM) Programs Run on a Recurrent Neural Network (RNN)
Here we describe experiments with a PowerPlay-based RNN that continually invents novel sequences of actions affecting an external environment, over time becoming a more and more general solver of self-invented problems.
RNNs are general computers that allow for both sequential and parallel computations. Given enough neurons and an appropriate weight matrix, an RNN can compute any function computable by a standard PC [16]. We use a particular RNN named SLIM RNN [25] to define the solver for our experiment. Here we briefly review its basics.
The $k$-th computational unit or neuron of our SLIM RNN is denoted $u_k$ ($k = 1, \ldots, n$). $w_{kl}$ is the real-valued weight on the directed connection from $u_k$ to $u_l$. At discrete time step $t = 1, 2, \ldots$ of a finite interaction sequence with the environment, $x_k(t)$ denotes the real-valued activation of $u_k$. There are designated neurons serving as online inputs, which read real-valued observations from the environment, and outputs whose activations encode actions in the environment, e.g., the movement commands for a robot. We initialize all $x_k(0) = 0$ and compute $x_k(t) = f_k\big(\sum_l w_{lk} x_l(t-1)\big)$, where $f_k$ may be the identity $f(z) = z$, the sigmoid $f(z) = 1/(1+e^{-z})$, or a threshold function that is 1 if $z$ exceeds a threshold and 0 otherwise. To program the SLIM RNN means to set the weight matrix $w$.
A special feature of the SLIM RNN is that it has a single halt neuron with a fixed halt-threshold. If at any time its activation exceeds the halt-threshold, the network's computation stops. Thus, any network topology in which there exists a path from the online or task inputs to the halt neuron can run self-delimiting programs [10, 3, 26, 21] studied in the theory of Kolmogorov complexity and algorithmic probability [27, 8]. Inspired by a previous architecture [15], neurons other than the inputs and outputs in our RNN are arranged in winner-take-all subsets (WITAS) of $m$ neurons each ($m = 4$ was used for this experiment). At each time step $t$, the activation of each WITAS neuron is set to 1 if it is the winning neuron of its subset (the one with the highest activation), and to 0 otherwise. This feature gives the SLIM RNN the potential to modularize itself, since neurons can act as gates to various self-determined regions of the network. By regulating the information flow, the network may use only a fraction of the weights for a given task. Apart from the online input, output and halt neurons, a fixed number of neurons are set to be task inputs. These inputs remain constant throughout a program's run and serve as self-generated task specifications. Finally, there is a subset of internal state neurons whose activations are considered as the final outcome when the program halts. Thus a non-compression task is: given a particular task input, interact with the environment (read online inputs, produce outputs) until the network halts and produces a particular internal state (the abstract goal), which is read from the internal state neurons. Since the SLIM RNN is a general computer, it can represent essentially arbitrary computable tasks in this way. Fig. 2 illustrates the network's activation spreading for a particular task. A more detailed discussion of SLIM RNNs and their efficient implementation can be found in the original report [25].
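A single activation-spreading step of such a network, including the WITAS gating and the halt test, might look as follows. This is a minimal sketch assuming sigmoid units everywhere; the layout and names are assumptions, not the paper's implementation:

```python
import math

def slim_rnn_step(x, W, witas, halt_id, halt_threshold):
    """One activation-spreading step of a SLIM-RNN-like network (sketch).
    x: current activations; W: W[l][k] is the weight from unit l to unit k.
    witas: list of winner-take-all subsets (lists of unit indices).
    Returns (new activations, halted?)."""
    n = len(x)
    new_x = [1.0 / (1.0 + math.exp(-sum(W[l][k] * x[l] for l in range(n))))
             for k in range(n)]                     # sigmoid units
    for subset in witas:                            # winner-take-all gating
        winner = max(subset, key=lambda k: new_x[k])
        for k in subset:
            new_x[k] = 1.0 if k == winner else 0.0  # winners gate info flow
    return new_x, new_x[halt_id] > halt_threshold   # self-delimiting halt
```

Because losing WITAS neurons are forced to 0, entire regions of the weight matrix can remain unused for a given task, which is the self-modularization property exploited later.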
The SLIM RNN is trained on the fovea environment described in Sec. 4.2 using the PowerPlay framework according to Algorithm 3 below. The difference to Algorithm 2 lies in task-set-specific details such as the encoding of task inputs and the definition of 'inventing and learning' a task. The bit string $p$ now encodes a set of real numbers between 0 and 1 which denote the constant task inputs for this program. Given a new set of task inputs, the new task is considered learned if the network halts and reaches a particular internal state (potentially after interacting with the environment), and remains able to properly reproduce the saved internal states for all previously learned tasks. This is implemented by first checking whether the network can halt and produce an internal state on the newly generated task inputs. Only if the network cannot halt within a chosen fraction of the time budget dictated by the length of $p$ is the remaining budget used for trying to learn the task using a simple mutation rule, modifying a few weights of the network. When $p$ is the single bit '0', the task is interpreted as a compression task. Here compression means either a reduction of the sum of squared weights without increasing the total number of connection usages by all previously learned tasks, or a reduction of the total number of connection usages on all previously learned tasks without increasing the sum of squared weights.
Since our PowerPlay variant methodically increases search time, half of which is used for compression, it automatically encourages the network to invent novel tasks that do not require many changes of weights used by many previous tasks.
Our SLIM RNN implementation efficiently resets activations computed by the numerous unsuccessful candidate programs tested during search. We keep track of used connections and active (winner) neurons at each time step, so that tracking and undoing the effects of a program costs essentially no more than its execution.
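A minimal sketch of this undo mechanism (all names are illustrative): log each overwritten activation during a candidate run, then restore the old values in reverse order, so resetting costs no more than the run itself did:

```python
class ActivationLog:
    """Undo log for activation writes during a candidate program's run."""

    def __init__(self, x):
        self.x = x          # shared activation vector
        self.trail = []     # (unit index, previous value) pairs

    def write(self, k, value):
        self.trail.append((k, self.x[k]))  # remember old value before change
        self.x[k] = value

    def undo(self):
        """Restore all overwritten activations, most recent first."""
        for k, old in reversed(self.trail):
            self.x[k] = old
        self.trail.clear()
```

Only units actually touched by the program appear in the trail, so an unsuccessful short run is also cheap to undo, regardless of network size.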
4.2 RNNControlled Fovea Environment
The environment for this experiment consists of a static image which is observed sequentially by the RNN through a fovea whose movement the network controls at each time step. The fovea produces 25 real-valued online inputs (normalized to $[0,1]$) by averaging the pixel intensities over regions of varying sizes, such that it has higher resolution at the center and lower resolution in the periphery (Fig. 3). The fovea is controlled using 8 real-valued outputs of the network and a parameter winthreshold. Out of the first four outputs, the one with the highest value greater than winthreshold is interpreted as a movement command: up, down, left, or right. If none of the first four outputs exceeds the threshold, the fovea does not move. Similarly, the next four outputs are interpreted as the fovea step size on the image (3, 9, 27 or 81 pixels if the threshold is exceeded, 1 pixel otherwise).
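The output interpretation just described can be sketched directly; `decode_fovea_action` is a hypothetical helper paraphrasing the text, not the authors' code:

```python
def decode_fovea_action(outputs, winthreshold):
    """Decode the 8 network outputs into a fovea command (sketch).
    Outputs 0-3 select a direction, outputs 4-7 a step size; in each group
    the largest value wins only if it exceeds winthreshold."""
    directions = ['up', 'down', 'left', 'right']
    step_sizes = [3, 9, 27, 81]
    d = max(range(4), key=lambda i: outputs[i])
    move = directions[d] if outputs[d] > winthreshold else None  # None: stay
    s = max(range(4), key=lambda i: outputs[4 + i])
    step = step_sizes[s] if outputs[4 + s] > winthreshold else 1
    return move, step
```

Note that direction and step size are decided independently, so a sub-threshold direction group leaves the fovea in place regardless of the step-size outputs.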
4.3 Results
The network’s internal states can be viewed as abstract summaries of its trajectories through the fovea environment and its parallel “internal thoughts.” The system invents more and more novel skills, each breaking the generalization ability of its previous SLIM NN weight matrix, without forgetting previously learned skills. Within 8 hours on a standard PC, a SLIM RNN consisting of 20 WITAS, with 4 neurons in each WITAS, invented 67 novel action sequences guiding the fovea before halting. These varied in length, consuming up to 27 steps. Over time the SLIM NN not only invented new skills to solve novel tasks, but also learned to speed up solutions to previously learned tasks, as shown in Fig. 4. For clarity, all figures presented here depict aspects of this same run, though results were consistent over many different runs.
The SLIM NN also learns to reduce the interactions with the environment. Fig. 5 shows the number of interactions required to solve certain previously learned fovea control tasks. Here an “interaction” is a SLIM NN computation step that produces at least one nonzero output neuron activation. General trend over different tasks and runs: the interactions decrease over time. That is, the SLIM NN essentially learns to build internal representations of its interaction with the environment, due to PowerPlay’s continual builtin pressure to speed up and simplify and generalize.
The SLIM NN often uses partially overlapping subsets of connection weights for generating different selfinvented trajectories. Fig. 6 shows that not all connections are used for all tasks, and that the connections used to solve individual tasks can become progressively more separated. In general, the variation in degree of separation depends on network parameters and environment.
As expected, PowerPlay-based SLIM NNs prefer to modify only few connections per novel task. Although the number of weight modifications per task attempt was chosen randomly between one and fifteen, on average only a few weights were changed to invent a new skill; see Fig. 7. Why? Because PowerPlay always goes for the novel task that is fastest to find and validate, and fewer weight changes tend to affect fewer previously learned tasks; that is, less time is needed to re-validate performance on previous tasks. In this way PowerPlay avoids a naively expected slowdown linear in the number of tasks. Although the number of skills that must not be forgotten grows all the time, the search time for new skills does not at all have to grow in proportion to the number of previously solved tasks.
As a consequence of its bias towards fast-to-validate solutions, the PowerPlay-based SLIM NN automatically self-modularizes. The SLIM RNN tested above had 1120 connections. Typically, 600 of them were used to solve a particular task, but on average fewer than three of them were changed. This means that for each newly invented task, the system reuses a lot of previously acquired knowledge without modification. The truly novel aspects of the task and its solution can often be encoded within just a handful of bits.
This type of self-modularization is more general than what can be found in traditional (non-inventive) modular reinforcement learning (RL) systems whose action sequences are chunked into macros to be reused by higher-level macros, as in the options framework [30] or in hierarchical RL [32]. Since the SLIM RNN is a general computer, and its weights are its program, subsets of the weights can be viewed as subprograms, and new subprograms can be formed from old ones in essentially arbitrary computable ways, as in general incremental program search [21].

5 Discussion and Outlook
PowerPlay for SLIM RNN represents a greedy implementation of central aspects of the Formal Theory of Fun and Creativity [22, 23]. The setup permits practically feasible, curious/creative agents that learn hierarchically and modularly, using general computational problem solving architectures. Each new task invention either breaks the solver’s present generalization ability, or compresses the solver, or speeds it up.
We can know precisely what is learned by a SLIM NN running PowerPlay. The self-invented tasks are clearly defined by inputs and abstract internal outcomes / results. Human interpretation of the NN's weight changes, however, may be difficult, a bit like with a baby that generates new internal representations and skills or skill fragments during play. What is their "meaning" in the eyes of the parents, to whom the baby's internal state is a black box? For example, in the case of the fovea tasks the learner invents certain input-dependent movements as well as abstractions of trajectories in the environment (limited by its vocabulary of internal states). The RNN weights at any stage encode the agent's present (possibly limited) understanding of the environment and what can be done in it.
PowerPlay has no problems with noisy inputs from the environment. However, a noisy version of an old, previously solved task must be considered as a new task, because in general we do not know what is noise and what is not. But over time PowerPlay can automatically learn to generalize away the “noise,” eventually finding a compact solver that solves all “noisy” instances seen so far.
Our first experiments focused on developmental stages of purely creative systems, and did not yet involve any externally posed tasks. Future work will test the hypothesis that systems that have been running PowerPlay for a while will be faster at solving many user-provided tasks than systems without such purely explorative components. This hypothesis is inspired by babies, who creatively seem to invent and learn many skills autonomously, which then helps them to learn additional teacher-defined external tasks. We intend to identify conditions under which such knowledge transfer can be expected.
Appendix A: Implementation details
The SLIM RNN used for Experiment 2 (fovea control) is constructed as follows:
Let the number of input, output and state neurons in the network be n_input, n_output and n_state, respectively. Let nb_comp be the number of computation blocks, each with block_size neurons. Thus there are nb_comp × block_size computation neurons in the network.

The network is wired as follows. Each task input neuron is connected to nb_comp computation neurons at random. Each online input neuron is connected to nb_comp/10 computation neurons at random. Each internal state neuron receives connections from nb_comp/2 random computation neurons. The halt neuron receives connections from nb_comp/2 random computation neurons. nb_comp × n_output random computation neurons are connected to random output neurons. Each neuron in each computation block is randomly connected to block_size other computation neurons.
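The block-internal part of this wiring recipe can be sketched as follows; the function below only builds the computation-neuron connections, and its name, seeding, and defaults are assumptions:

```python
import random

def wire_slim_rnn(nb_comp=20, block_size=4, rng=None):
    """Sketch of the random wiring of computation neurons described above:
    each of the nb_comp * block_size computation neurons is randomly
    connected to block_size other computation neurons.
    Returns a set of (src, dst) pairs."""
    rng = rng or random.Random(0)
    n_comp = nb_comp * block_size
    edges = set()
    for src in range(n_comp):
        # sample block_size distinct targets, excluding self-connections
        targets = rng.sample([k for k in range(n_comp) if k != src], block_size)
        edges.update((src, dst) for dst in targets)
    return edges
```

The input, output, state and halt connections described above would be added analogously, with the fan-in/fan-out counts given in the text.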
We used nb_comp = 20, block_size = 4, and n_state = 3, with n_input = 25 and n_output = 8 for the fovea control task. The halt-threshold was set to 3, and the WITAS and fovea control winthresholds were set to fixed values. All connection weights were initialized to random values from a fixed interval. The cost of using a connection (consuming part of the time_budget) was the same small constant for all connections. The mutation rule was as follows. For non-compression tasks, the network is first run using the new task inputs to check if the task can already be solved by generalization. If not, we randomly generate an integer between 1 and 1/50th of the number of all connections used during the unsuccessful run, and randomly change that many weights by adding to each a uniformly random number from a fixed interval. For compression tasks, we randomly generate a number between 1 and 1/50th of the number of all connections used for any of the tasks in the current repertoire, and randomly modify that many of those connections.
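The mutation rule for non-compression tasks might be sketched like this; the perturbation interval (here ±0.1) and the function name are assumptions, since the text omits the exact values:

```python
import random

def mutate_weights(weights, used_connections, rng=None, scale=0.1):
    """Sketch of the mutation rule described above: draw a random number m
    between 1 and 1/50th of the connections used during the failed run,
    then perturb m of them by a uniform random amount in [-scale, scale]."""
    rng = rng or random.Random(0)
    used = list(used_connections)
    upper = max(1, len(used) // 50)
    m = rng.randint(1, upper)
    for c in rng.sample(used, m):           # m distinct used connections
        weights[c] += rng.uniform(-scale, scale)  # small random change
    return weights
```

Restricting candidates to connections used during the unsuccessful run focuses the mutation on weights that can actually influence the failed task.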
The fraction of the time budget available to check whether a candidate task is not yet solvable by the current solver was chosen randomly between 0 and time_budget/2. For compression tasks, the sum of squared weights had to decrease by at least a factor of 1/1000 to be acceptable.
Acknowledgments
PowerPlay [24] and selfdelimiting recurrent neural networks (SLIM RNN) [25] were developed by J. Schmidhuber and implemented by R.K. Srivastava and B.R. Steunebrink. We thank M. Stollenga and N.E. Toklu for their help with the implementations. This research was funded by the following EU projects: IMCLeVeR (FP7ICTIP231722) and WAY (FP7ICT288551).
References
 [1] A. Barto. Intrinsic motivation and reinforcement learning. In G. Baldassarre and M. Mirolli, editors, Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2012. In press.
 [2] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
 [3] G. J. Chaitin. A theory of program size formally identical to information theory. Journal of the ACM, 22:329–340, 1975.
 [4] P. Dayan. Exploration from generalization mediated by multiple controllers. In G. Baldassarre and M. Mirolli, editors, Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2012. In press.
 [5] V. V. Fedorov. Theory of optimal experiments. Academic Press, 1972.
 [6] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
 [7] K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38:173–198, 1931.
 [8] A. N. Kolmogorov. Three approaches to the quantitative definition of information. Problems of Information Transmission, 1:1–11, 1965.
 [9] L. A. Levin. Universal sequential search problems. Problems of Information Transmission, 9(3):265–266, 1973.
 [10] L. A. Levin. Laws of information (nongrowth) and aspects of the foundation of probability theory. Problems of Information Transmission, 10(3):206–210, 1974.
 [11] U. Nehmzow, Y. Gatsoulis, E. Kerr, J. Condell, N. H. Siddique, and T. M. McGinnity. Novelty detection as an intrinsic motivation for cumulative learning robots. In G. Baldassarre and M. Mirolli, editors, Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2012. In press.
 [12] H. Ngo, M. Ring, and J. Schmidhuber. Compression progress-based curiosity drive for developmental learning. In Proceedings of the 2011 IEEE Conference on Development and Learning and Epigenetic Robotics (IEEE-ICDL-EPIROB). IEEE, 2011.
 [13] P.Y. Oudeyer, A. Baranes, and F. Kaplan. Intrinsically motivated learning of real world sensorimotor skills with developmental constraints. In G. Baldassarre and M. Mirolli, editors, Intrinsically Motivated Learning in Natural and Artificial Systems. Springer, 2012. In press.
 [14] J. Piaget. The Child’s Construction of Reality. London: Routledge and Kegan Paul, 1955.
 [15] J. Schmidhuber. A local learning algorithm for dynamic feedforward and recurrent networks. Connection Science, 1(4):403–412, 1989.
 [16] J. Schmidhuber. Dynamische neuronale Netze und das fundamentale raumzeitliche Lernproblem. Dissertation, Institut für Informatik, Technische Universität München, 1990.
 [17] J. Schmidhuber. Curious modelbuilding control systems. In Proceedings of the International Joint Conference on Neural Networks, Singapore, volume 2, pages 1458–1463. IEEE press, 1991.
 [18] J. Schmidhuber. Artificial curiosity based on discovering novel algorithmic predictability through coevolution. In P. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, and Z. Zalzala, editors, Congress on Evolutionary Computation, pages 1612–1618. IEEE Press, 1999.
 [19] J. Schmidhuber. Exploring the predictable. In A. Ghosh and S. Tsuitsui, editors, Advances in Evolutionary Computing, pages 579–612. Springer, 2002.
 [20] J. Schmidhuber. Bias-optimal incremental problem solving. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15 (NIPS 15), pages 1571–1578, Cambridge, MA, 2003. MIT Press.
 [21] J. Schmidhuber. Optimal ordered problem solver. Machine Learning, 54:211–254, 2004.
 [22] J. Schmidhuber. Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 18(2):173–187, 2006.
 [23] J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (19902010). IEEE Transactions on Autonomous Mental Development, 2(3):230–247, 2010.
 [24] J. Schmidhuber. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem. Technical Report arXiv:1112.5309v1 [cs.AI], 2011.
 [25] J. Schmidhuber. Self-delimiting neural networks. Technical Report IDSIA-08-12, arXiv:1210.0118v1 [cs.NE], IDSIA, 2012.
 [26] J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105–130, 1997.
 [27] R. J. Solomonoff. A formal theory of inductive inference. Part I. Information and Control, 7:1–22, 1964.
 [28] R. K. Srivastava, B. R. Steunebrink, M. Stollenga, and J. Schmidhuber. Continually adding self-invented problems to the repertoire: First experiments with PowerPlay. In Proceedings of the 2012 IEEE Conference on Development and Learning and Epigenetic Robotics (IEEE-ICDL-EPIROB). IEEE, 2012.
 [29] J. Storck, S. Hochreiter, and J. Schmidhuber. Reinforcement driven information acquisition in nondeterministic environments. In Proceedings of the International Conference on Artificial Neural Networks, Paris, volume 2, pages 159–164. EC2 & Cie, 1995.
 [30] R. S. Sutton, D. Precup, and S. Singh. Intra-option learning about temporally abstract actions. In Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann, 1998.
 [31] A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 41:230–267, 1936.
 [32] M. Wiering and J. Schmidhuber. HQ-learning. Adaptive Behavior, 6(2):219–246, 1998.