Novelty Producing Synaptic Plasticity

02/10/2020 ∙ by Anil Yaman, et al. ∙ Università di Trento ∙ TU Eindhoven

A learning process with the plasticity property often requires reinforcement signals to guide the process. However, in some tasks (e.g. maze-navigation), it is very difficult (or impossible) to measure the performance of an agent (i.e. a fitness value) to provide reinforcements, since the position of the goal is not known. This requires finding the correct behavior among a vast number of possible behaviors without knowledge of the reinforcement signals. In these cases, an exhaustive search may be needed. However, this might not be feasible, especially when optimizing artificial neural networks in continuous domains. In this work, we introduce novelty producing synaptic plasticity (NPSP), where we evolve synaptic plasticity rules to produce as many novel behaviors as possible in order to find the behavior that can solve the problem. We evaluate the NPSP on a maze-navigation task in deceptive maze environments that require complex actions and the achievement of subgoals to be completed. Our results show that the search heuristic used with the proposed NPSP is indeed capable of producing many more novel behaviors than a random search taken as baseline.


1. Introduction

During a learning process, the fitness value of each behavior can be measured and used as a reinforcement signal to guide the learning process. For instance, in a maze-navigation task, a fitness measure such as the distance of an agent to the goal position can be used as reinforcement to optimize its behavior. However, in realistic scenarios, this fitness measure might not be available, since the goal position is not known.

Figure 1. An illustration of a hypothetical fitness landscape where there are only two possible discrete fitness values (1: global optimum, 0: global minimum). The x- and y-axes show the candidate solutions and their fitness values respectively. Typically, only a small set of candidate solutions is associated with the high fitness value that indicates a solution to the problem. For instance, in the case of maze-navigation, only the behaviors that reach the goal position have a fitness value of 1; all other behaviors have a fitness value of 0. Only a small fraction of all possible behaviors reaches the goal position.

We consider a learning process where it is very difficult (or impossible) to measure the fitness of a behavior of an agent to provide reinforcement signals. We refer to this problem as the needle in a haystack problem (Hinton and Nowlan, 1996) where the needle refers to a solution (i.e. a behavior that can solve the task) and the haystack refers to the search space (i.e. all possible behaviors).

An illustration of a hypothetical fitness landscape for the needle in a haystack problem is given in Figure 1. The x- and y-axes show the solutions (behaviors) and their fitness values respectively. The problem assumes that there is no metric available to measure the fitness of a behavior quantitatively: the task is either solved or not. Therefore, fitness values of 1 and 0 indicate successful and failed behaviors respectively. There may be more than one behavior that solves the problem; on the other hand, we assume that the vast majority of behaviors fail to solve the task.

Novelty search and MAP-Elites algorithms have been successfully used in tasks where the use of fitness values is often detrimental for finding good solutions via traditional (fitness-driven) evolutionary search (Lehman and Stanley, 2008; Mouret and Clune, 2015). These algorithms may be beneficial for solving needle in a haystack problems. However, they require external memory to store the encountered solutions and, in the case of MAP-Elites, fitness values to map the solutions to a predefined feature space.

In this work, we propose novelty producing synaptic plasticity (NPSP) for the needle in a haystack problem, where we use synaptic plasticity to produce novel behaviors. The synaptic plasticity performs changes in the connection weights of artificial neural networks (ANNs) based on the local activations of neurons. We use genetic algorithms to optimize the NPSP rules to produce as many novel behaviors as possible, in order to find a behavior that can solve the task. In contrast to novelty search, the NPSP performs changes in a single ANN (controlling a single agent) without keeping track of the produced behaviors.

We evaluate the performance of the NPSP on a maze-navigation task using deceptive maze environments which require complex actions and the achievement of subgoals to be completed. During the evaluation phase, we assume that knowledge of the fitness value, in terms of the distance of the agent to the goal position, is not available. Our results show that the proposed NPSP produces a large number of novel behaviors relative to random search, which may eventually help find a solution when the fitness function is unknown or difficult to evaluate.

The rest of the paper is organized as follows: in Section 2, we provide background knowledge on the evolution of synaptic plasticity and introduce our method for producing novelty through synaptic plasticity in ANNs. In Section 3, we provide the details of our experimental setup, where we discuss the test environments, agent architecture, genetic algorithm and benchmark algorithms. In Section 4 we present our results, and finally, in Section 5, we discuss our conclusions and future work.

2. Evolving Plasticity for Producing Novelty

Synaptic plasticity refers to the property of biological neural networks (BNNs) that allows them to change their configuration during their lifetime. These changes are known to occur in synapses (i.e. connections between neurons) based on local information (Holtmaat and Svoboda, 2009). Hebbian learning was proposed to model synaptic plasticity in ANNs (Hebb, 1949; Kuriscak et al., 2015). In this form of learning, synaptic plasticity rules adjust the weight of a connection between two neurons based on the correlation between the activations of the neuron before (pre-synaptic) and after (post-synaptic) the connection. Moreover, reinforcement signals can be used to guide the learning process by performing these adjustments so as to match the neuron outputs with the desired behaviors.

The basic form of Hebbian learning can suffer from instability: an increase in the connection weight between two neurons increases their correlation, which in turn causes a further increase in the connection weight. To reduce this effect, several variants of Hebbian learning rules have been proposed in the literature (Vasilkoski et al., 2011). Nevertheless, further optimization may be needed to find learning rules that produce stable and coherent learning in certain learning scenarios.

Inspired by the evolution of learning in BNNs, evolutionary computing has been used to optimize plasticity rules that produce the plasticity property in ANNs (Soltoggio et al., 2018). A number of previous works optimized the type of Hebbian learning rule and its parameters (Floreano and Urzelai, 2000; Niv et al., 2002); other works used more complex models (i.e. additional ANNs) to perform synaptic changes (Risi and Stanley, 2010; Orchard and Wang, 2016).

Here, we optimize the synaptic plasticity rules to encourage novel behaviors. This may be especially beneficial in cases where there is no information (i.e. fitness values or reinforcement signals) about the problem to guide the learning process.

In an ANN, the activation $a_j$ of a post-synaptic neuron is computed by:

$$a_j = \psi\Big(\sum_{i} w_{i,j} \, a_i\Big) \qquad (1)$$

where $a_i$ is the pre-synaptic neuron activation, $w_{i,j}$ is the connection weight between the pre- and post-synaptic neurons, and $\psi$ is the activation function. We use a step function which assigns 0 to $a_j$ if the weighted sum is smaller than 0, and 1 otherwise.
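As a minimal sketch of Eq. (1) (assuming, since the extraction dropped the threshold, that the step function switches at 0):

```python
import numpy as np

def step(x):
    """Binary step activation: 0 if the input is below 0, else 1."""
    return np.where(x < 0.0, 0.0, 1.0)

def activate(weights, pre_activations):
    """Post-synaptic activation (Eq. 1): the step function applied to the
    weighted sum of pre-synaptic activations."""
    return step(np.dot(weights, pre_activations))
```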

At the end of an episode (i.e. a predefined number of action steps during which the agent is allowed to perform the task), the connection weights are updated as follows:

$$\Delta w_{i,j} = \eta \cdot m(\mathrm{NAT}_{i,j}) \qquad (2)$$
$$w_{i,j} \leftarrow w_{i,j} + \Delta w_{i,j} \qquad (3)$$

where $\eta$ is the learning rate and $m(\cdot)$ is the evolved plasticity rule, detailed below, that maps the neuron activation trace of the synapse to a change in $\{-1, 0, 1\}$. Finally, we scale the incoming connections of each neuron $j$ to have unit length:

$$w_{i,j} \leftarrow \frac{w_{i,j}}{\lVert \mathbf{w}_{j} \rVert} \qquad (4)$$

This avoids increasing/decreasing the connection weights indefinitely, and also introduces synaptic competition.
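The unit-length scaling of Eq. (4) can be sketched as follows, assuming the weights are stored in a matrix whose rows hold each post-synaptic neuron's incoming connections:

```python
import numpy as np

def normalize_incoming(W):
    """Scale each post-synaptic neuron's incoming weight vector to unit
    length (Eq. 4); rows index post-synaptic neurons."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    norms[norms == 0.0] = 1.0  # leave all-zero rows unchanged
    return W / norms
```

Because each row is renormalized as a whole, increasing one incoming weight necessarily shrinks the others, which is the synaptic competition mentioned above.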

Eligibility traces were proposed to trace the pairwise activations of pre- and post-synaptic neurons during an episode (Gerstner et al., 2018). Data structures inspired by eligibility traces were previously employed to associate pairwise neuron activations with reinforcement signals (Yaman et al., 2019; Izhikevich, 2007; Soltoggio and Steil, 2013). As shown in Table 1, we use neuron activation traces (NATs) in each synapse to keep track of the pairwise activation frequencies to be used in the synaptic plasticity rules. We employ a threshold $\theta$ to convert the frequencies to a binary representation: if a frequency value is lower than $\theta$, we assign 0, otherwise 1.
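A NAT can be sketched as a small counter attached to each synapse; the class below is a hypothetical illustration of the idea, not the authors' implementation:

```python
import numpy as np

class NAT:
    """Neuron activation trace for one synapse: counts the four binary
    (pre, post) activation pairs observed during an episode."""
    def __init__(self):
        self.counts = np.zeros((2, 2))  # counts[pre][post]
        self.total = 0

    def record(self, pre, post):
        self.counts[int(pre), int(post)] += 1
        self.total += 1

    def binarize(self, theta):
        """4-bit NAT state: 1 where a pair's frequency reaches theta."""
        freqs = self.counts.flatten() / max(self.total, 1)
        return tuple(int(f >= theta) for f in freqs)
```

For example, an episode in which the pair (0,0) occurs twice and (1,1) occurs twice binarizes to (1, 0, 0, 1) with theta = 0.5.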

Table 1. The NAT data structure. For each connection $(i,j)$, $\mathrm{NAT}_{i,j}$ stores the number of occurrences of each of the four binary activation states of the neuron pair $(a_i, a_j)$.

The goal in this case is to find how to perform synaptic changes based on the binarized NAT values such that the network produces novel behaviors. Thus, as illustrated in Table 2, we use a genetic algorithm (GA) to find weight updates ($\Delta w$) for all possible states of the 4-dimensional binary vectors. Each of these synaptic updates can be one of three values $\{1, 0, -1\}$, indicating increase, stable or decrease respectively (thus there are a total of $3^{16}$ possible plasticity rules).

Table 2. A list of binarized NAT states (based on a threshold $\theta$) shown in tabular form. The synaptic changes are performed based on the NATs.

The reason for using binary representations is to limit the search space. In addition, discrete rules (as shown in Table 2) allow interpretability since they can be converted into a set of “if-then” statements. This may be more difficult when more complex functions (e.g., ANNs) are used to perform the synaptic changes.
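Under these definitions, a plasticity rule is simply a lookup table from the 16 binarized NAT states to $\{-1, 0, 1\}$. The sketch below (hypothetical names) also shows the "if-then" reading mentioned above:

```python
import itertools
import random

# All 16 possible binarized NAT states (4-dimensional binary vectors).
STATES = list(itertools.product([0, 1], repeat=4))

def random_rule(rng=random):
    """A plasticity rule: one synaptic change in {-1, 0, +1} per NAT
    state, so there are 3**16 possible rules in total."""
    return {s: rng.choice([-1, 0, 1]) for s in STATES}

def as_if_then(rule):
    """Render a rule as interpretable 'if-then' statements."""
    return ["if NAT == {} then dw = {:+d}".format(s, d)
            for s, d in rule.items()]
```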

3. Experimental Setup

In this section, we provide the details of our experimental setup. We designed deceptive maze environments and used the NPSP to produce novel behaviors in order to find a behavior that achieves the goal. Since we do not use fitness values, we take random search and random walk algorithms as baselines. These tasks require complex actions to solve; therefore, we use recurrent neural networks of various sizes. We discuss the details of the environments, the agent architecture, the genetic algorithm used to evolve the NPSP rules, and the benchmark algorithms in the following sections.

3.1. Deceptive Maze Environments

We perform experiments on environments that we refer to as deceptive mazes (DM), because in these cases it is not straightforward to specify a fitness function that solves the task. Moreover, the use of simple fitness functions (such as the Euclidean distance to the goal) is usually deceptive in these problems, since such functions are prone to getting stuck in a local optimum and thus prevent finding good solutions (Auerbach et al., 2016; Lehman and Stanley, 2008).

(a) DM11 door closed (b) DM12 door opened (c) DM21 door closed (d) DM22 door opened
Figure 2. An illustration of two deceptive maze environments. Figures (a) and (b) are two versions of the first environment, and Figures (c) and (d) are two versions of the second environment. The only difference between the two versions of the same environment is an opening in the middle wall that allows agents to travel from the left room to the right. Labels “1” and “2” show two independent starting positions of the agent.

Visual illustrations of the DM environments are shown in Figure 2. The environments consist of a grid of cells. Each cell can be occupied by one of five possibilities: empty, wall, goal, button, agent, color-coded in white, black, blue, green and red respectively. There are two starting positions of the agent, labelled “1” and “2”, which are tested separately.

Figures (a) and (b) show two versions of the same environment, which we refer to as DM11 and DM12, and Figures (c) and (d) show two versions of the same environment which we refer to as DM21 and DM22. The difference between the two versions of the same environment is an opening (door) in the middle of the wall that allows the agent to travel between the rooms when it is open.

Starting from one of the starting positions, the behavior that solves the task involves first going to the button area (in green) and performing a “press button” action, which opens the door in the middle of the wall. The agent is then required to pass through this opening and reach the goal position (in blue).

3.2. Agent Architecture

An illustration of the architecture of the agents used for the deceptive maze tasks is given in Figure 3. In each action step, the agent takes the nearest right, front and left cells as inputs and performs one of five actions: stop, left, right, straight, press. Each input sensor senses whether or not there is a wall (represented as 0 and 1 respectively). The door opens when the press action is performed, but only if the agent is within the button area (green). Multiple press actions while the agent is within the button area have no additional effect.

(a) (b) (c)
Figure 3. (a): The sensory inputs and action outputs of the recurrent neural networks that are used to control the agents; (b) and (c): The architectures of the network without and with a hidden layer respectively.

As illustrated in Figures (b) and (c), we use two types of RNNs (without and with a hidden layer) to control the agents. The network shown in Figure (b) consists of 40 connection parameters: the feedforward connections from the 3 sensory inputs plus a bias to the 5 outputs, and the recurrent connections among the output neurons (excluding self-node connections). The network shown in Figure (c) consists of 15 hidden neurons and 4 sets of connections between the layers: input-to-hidden, recurrent hidden-to-hidden (excluding self-node connections), hidden-to-output, and output-to-hidden feedback connections. Thus, this network has in total 420 parameters. We used the network without the hidden layer and the network with 15 hidden neurons during evolution to limit the computation needed in the evaluation process. We further tested the evolved NPSP rules on networks with 30 and 50 hidden neurons; these networks have 1290 and 3150 parameters respectively.

As for the networks without a hidden layer, the activations of the output layer are computed as:

$$\mathbf{a}_o^{(t)} = \psi\big(W_{io}\,\mathbf{a}_i^{(t)} + \alpha\,W_{oo}\,\mathbf{a}_o^{(t-1)}\big) \qquad (5)$$

In the case of networks with a hidden layer, the activations of the hidden and output layers are computed as:

$$\mathbf{a}_h^{(t)} = \psi\big(W_{ih}\,\mathbf{a}_i^{(t)} + \alpha\,W_{hh}\,\mathbf{a}_h^{(t-1)} + \beta\,W_{oh}\,\mathbf{a}_o^{(t-1)}\big) \qquad (6)$$
$$\mathbf{a}_o^{(t)} = \psi\big(W_{ho}\,\mathbf{a}_h^{(t)}\big) \qquad (7)$$

where the parameters $\alpha$ and $\beta$ are added to scale the recurrent and feedback connections respectively, and $t$ denotes the time step.

3.3. Genetic Algorithm

The NPSP rules consist of discrete and continuous parts. A standard GA was used to evolve the discrete parts of the NPSP rules. The discrete parts of the genotypes consist of 16 genes, initialized randomly from $\{-1, 0, 1\}$. The continuous parts of the genotypes are initialized randomly within predefined ranges. Thus, the genotype of the individuals is represented by a 20-dimensional discrete/real-valued vector (19-dimensional in the case without a hidden layer).

We evaluate each NPSP rule on the two environments illustrated in Figure 2, for two starting positions and three trials each. Thus, in total, we perform 12 independent trials. Each of these trials consists of 500 episodes of the learning process, where each episode allows 250 action steps to reach the goal from the starting position.

The fitness value of an individual NPSP rule is computed as:

$$f = \frac{1}{12} \sum_{k=1}^{12} |B_k| \qquad (8)$$

which is the average number of novel behaviors the NPSP rule produces per trial. To calculate it, we abstract and record the behavior of the agent during each episode and append it to the behavior set $B_k$ of trial $k$; the number of novel (unique) behaviors is then averaged over the trials.

Figure 4. An example illustration of the environment representation that is used to abstract the behavior of the agents.

The behavior abstraction is performed as follows. The environment is divided into squares, as shown in Figure 4, and each square is given two unique identifiers (ids) (e.g. “1” and “1*”) to distinguish between two states of the agent: “located in the square” and “located in the square and pressing the button”. Inspired by Pugh et al. (2015), we abstract the behavior of an agent by recording its trajectory based on the locations visited, and save it as a sequence of ids in string form. For instance, one example string could be: “13-13*-12-11-4-3-2-1-8-9-10-10*”. This string means that the agent started from square 13, then performed a press button action while in square 13, then passed through the sequence of squares 12, 11, 4, 3, 2, 1, 8, 9, 10, and finally performed a press button action while in square 10. We do not repeat the square id if the agent stays in the same square for more than one time step. We refer to such a string as a behavior. We collect the behavior produced in each episode and count how many novel behaviors the NPSP rule is able to produce during one trial (500 learning episodes), i.e. how many unique sequences were generated. Thus, we aim to maximize the number of novel behaviors produced, in the attempt to find the behavior that solves the task.
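The abstraction described above can be sketched as follows (a hypothetical helper, assuming the trajectory is given as (square id, pressed-button) pairs):

```python
def abstract_behavior(trajectory):
    """Compress a trajectory of (square_id, pressed_button) pairs into a
    behavior string, skipping consecutive repeats of the same id."""
    out = []
    for square, pressed in trajectory:
        token = "{}*".format(square) if pressed else str(square)
        if not out or out[-1] != token:
            out.append(token)
    return "-".join(out)
```

A trial's novelty count is then simply the number of unique strings collected over its 500 episodes, e.g. `len(set(behaviors))`.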

(a) (b)
Figure 5. The distances of each cell to the goal position in each environment are shown as heatmaps where the intensity of red color indicates lower distance.

The distances of each cell to the goal position in the environments are measured as shown in Figure 5. During each episode, the closest distance to the goal position is also recorded.

The comparison is performed based on two performance measures: “novelty” and “distance”. The latter is the average of the smallest distances to the goals that an agent achieved during the episodes. Both these measures are scaled within a certain range to make it easy to perform comparisons between the results of different runs related to different starting points and different environments. Thus, the novelty measure is divided by the number of episodes, to scale it between 0 and 1. The higher the novelty score of an agent, the more novel behaviors it has produced. The distance measure is adjusted depending on whether the agent manages to pass through the door to the second room where the goal is located. If the agent is not able to pass to the second room, its distance measure is updated as:

$$d' = 1 + \frac{d_{\min}}{d_{\max}} \qquad (9)$$

Otherwise (if the agent manages to go to the second room where the goal is located), its distance measure is updated as:

$$d' = \frac{d_{\min}}{d_{\max_2}} \qquad (10)$$

where $d_{\min}$ is the smallest distance to the goal achieved during the episodes, and $d_{\max}$ and $d_{\max_2}$ are constant values indicating the maximum distance to the goal and the maximum distance to the goal within the second room respectively. Thus, the updated distance measure is between 0 and 2. If it is greater than 1, the agent was not able to pass to the second room; if it is smaller than 1, the agent managed to pass to the second room. Overall, its value indicates the distance to the goal position: the smaller, the closer.
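Assuming Eqs. (9) and (10) take the form described (a ratio, shifted by 1 when the second room is never reached), the scaled measure can be sketched as:

```python
def scaled_distance(d_min, reached_second_room, d_max, d_max2):
    """Distance measure scaled to [0, 2] (Eqs. 9-10, reconstructed from
    the description): values above 1 mean the agent never entered the
    second room, values below 1 mean it did."""
    if reached_second_room:
        return d_min / d_max2
    return 1.0 + d_min / d_max
```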

We use a population size of 14 and employ a roulette wheel selection operator with an elite number of four. We use a 1-point crossover operator with a probability of 0.5 and a custom mutation operator which re-samples each discrete dimension of the genotype with a probability of 0.15 and performs a Gaussian perturbation with zero mean and 0.1 standard deviation on the continuous parameters. We run the evolutionary process for 100 generations. In each generation, we store the NPSP rules that produced the largest number of novel behaviors and the NPSP rules that achieved the minimum distance to the goal positions.
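The custom mutation operator can be sketched as follows (a hypothetical helper, assuming the 16 discrete genes precede the continuous ones in the genotype):

```python
import random

def mutate(genotype, n_discrete=16, p=0.15, sigma=0.1, rng=random):
    """Re-sample each discrete gene from {-1, 0, 1} with probability p;
    add Gaussian noise (mean 0, std sigma) to the continuous genes."""
    child = list(genotype)
    for i in range(n_discrete):
        if rng.random() < p:
            child[i] = rng.choice([-1, 0, 1])
    for i in range(n_discrete, len(child)):
        child[i] += rng.gauss(0.0, sigma)
    return child
```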

3.4. Benchmark Algorithms

We use two analogous algorithms, Random Search (RS) and Random Walk (RW), to perform comparisons with the NPSP rules. Like the NPSP, the RS and RW algorithms use a single solution and perform synaptic changes after every episode. However, they perform these changes by random re-initialization and random perturbation respectively, without using any knowledge of the neuron activations such as that introduced with the NPSP rules.

(a) (b) (c)
Figure 6. The learning process of the RNNs that control the agents using: (a) random search, (b) random walk, (c) novelty producing synaptic plasticity.

Figure 6 shows the learning processes with RS, RW and NPSP. All algorithms start with a randomly initialized RNN which is used to control the agent within the environment for an episode. At the end of an episode, we obtain the episodic performance as 1 or 0, which indicates whether or not the task is solved. If the task is not solved, we perform synaptic changes and test the agent on the task again. This process continues for a certain number of episodes, or until the task is solved. In the case of RS, after each episode the network is re-initialized. In the case of RW, the weights of the network are perturbed by a Gaussian perturbation with a fixed standard deviation. Thus, RS performs a random search over the whole search space, whereas RW performs a random search within the neighborhood of the initial network. In the case of the NPSP, we use the evolved rules to perform the changes.
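The three learning loops differ only in their update step; a minimal sketch, with hypothetical shapes and function names:

```python
import numpy as np

def learn(update_fn, run_episode, init_weights, n_episodes=500):
    """Shared learning loop for RS, RW and NPSP: run an episode, stop if
    the task is solved, otherwise modify the weights and try again."""
    W = init_weights.copy()
    for episode in range(n_episodes):
        if run_episode(W):          # episodic performance: solved or not
            return W, episode
        W = update_fn(W)            # RS / RW / NPSP differ only here
    return W, n_episodes

# RS re-initializes the whole network; RW perturbs it locally.
rs_update = lambda W: np.random.uniform(-1.0, 1.0, size=W.shape)
rw_update = lambda W: W + np.random.normal(0.0, 0.1, size=W.shape)
```

An NPSP update would instead apply the evolved rule to each synapse's NAT, as described in Section 2.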

4. Experimental Results

In this section, we present the results of the agents trained using RS, RW and NPSP rules. The comparisons between the results of the algorithms are performed based on the novelty and distance measures that are explained in Section 3.

Table 3 shows the median of the novelty and distance measures of the agents trained by RS, RW and the evolved NPSP rules. The columns labelled “Goal” and “Second Room” report the number of times the agents were able to reach the goal and to enter the second room respectively. For all algorithms, the learning process is set to 500 episodes and 12 trials in total (3 trials × 2 starting positions × 2 environments).

The rows labelled RS0H, RW0H and NPSP0H show the results of the algorithms on the RNN models without a hidden layer. We observe that RS0H produces more novel behaviors than RW0H. This could be expected, since RS0H randomly samples from the search space after each episode, whereas RW0H performs iterative perturbations on a randomly initialized solution and thus searches more locally. Consequently, RS0H also leads to a lower distance measure.

Algorithm Novelty Distance Goal Second Room
RS0H 0.095 1.3974 0 0
RW0H 0.018 1.5032 0 0
NPSP0H 0.3550 0.6786 2 7
RS15H 0.2583 1.3302 0 2
RW15H 0.2228 1.4533 0 0
NPSP15H 0.4110 0.8393 0 8
RS30H 0.4328 1.2856 0 3
RW30H 0.3022 1.2944 0 3
NPSP30H 0.8400 0.8571 1 7
RS50H 0.6300 0.8920 0 7
RW50H 0.5072 1.1606 0 3
NPSP50H 1.0000 0.5179 1 8
Table 3. The median of the novelty and distance measures of agents trained by RS, RW and the evolved best performing NPSP rule.

NPSP0H was selected out of six independent evolutionary runs because it produced the highest number of novel behaviors. The agent trained with NPSP0H produced about 177 (0.3550 × 500) novel behaviors on average, and was able to enter the second room in 7 out of 12 trials.

The rest of the rows show the comparison results of the networks with hidden layers. Similarly, we performed two independent evolutionary runs on RNNs with 15 hidden neurons to optimize the NPSP rules. We then selected the best NPSP rule and tested it on RNNs with 15, 30 and 50 hidden neurons; the results are labelled NPSP15H, NPSP30H and NPSP50H respectively.

Quite interestingly, we observe that the algorithms produce a larger number of novel behaviors when the number of hidden neurons is increased. For instance, RS15H produces about 129 (0.2583 × 500) novel behaviors and RS50H produces about 315 (0.63 × 500) novel behaviors. On the other hand, the NPSP rule was able to produce many more novel behaviors than RS and RW for all network sizes. For instance, NPSP50H produced 500 novel behaviors in 500 episodes and yielded the lowest median distance to the goal over 12 trials (it also reached the second room in 8 trials).

We noticed that NPSP0H was able to produce competitive results in terms of distance even though it was not able to produce as much novelty as the cases with hidden neurons. This may be due to the “granularity” of the behaviors produced. We would expect the RNNs with hidden layers (especially larger ones) to produce behaviors that are more complicated and detailed, due to the large number of parameters that can affect the production of behavior sequences, whereas we expect the RNNs without a hidden layer to produce more high-level behavior patterns. This can explain why the smaller networks (i.e. without a hidden layer) produce fewer novel behaviors and yet succeed in finding behaviors that get closer to the goal: they produce high-level and less complex behaviors (e.g. bouncing off the walls and following the walls) that can explore the environment. Moreover, small weight changes in small networks may lead to smaller behavioral differences than the same changes in larger networks; thus, as expected, we observe that a larger number of novel behaviors is produced by larger networks even though the same NPSP rule is used. We recorded a video, available online at http://bit.ly/2H4IOp5, to illustrate a visual comparison of the successful agent behaviors found by the NPSP rules using the RNN models with and without a hidden layer.

In Figures 7(a) and 7(b), we illustrate the novelty and distance trends of six independent evolutionary runs of the NPSP rules optimized using the RNN model without a hidden layer. Since the NPSP rules were selected based on their novelty, their distance trends do not decrease monotonically; some rules showed better distance but had a lower novelty score.

(a) Novelty Trend (b) Distance Trend
Figure 7. The novelty and distance trends of the NPSP rules during 6 independent evolutionary runs.
(a) DM1 Distance Measure (b) DM1 Novelty Measure (c) DM2 Distance Measure (d) DM2 Novelty Measure
Figure 8. The median of the novelty and distance measures of 12 independent trials of the agents trained by NPSP0H. The value in each cell indicates the result when it is set as the starting point. Only the first rooms of the environments are shown since the agents can only start from there. The intensity of the colour indicates the magnitude of the value in each cell.

We further assess the performance of NPSP0H with respect to different starting positions. For that, we assigned each cell in the first room of DM1 and DM2 as the starting point of the agent and used NPSP0H to train the agent for 12 independent trials (each starting from a randomly initialized RNN configuration). Note that the NPSP rules were evolved based on two selected starting points but in this case they are tested on all locations, which may give some insights into the generalizability of their performance. The results are shown in Figure 8 where we show the median of the distance and novelty measures in each cell when it is used as the starting point. We color-coded the figures based on the magnitude of the values in each cell where the intensity of red indicates higher values.

Based on the distance measures shown in Figures (a) and (c), we observe that agents starting close to the wall and behind the obstacle do not seem to get closer to the goal position. Correspondingly, Figures (b) and (d) show lower novelty measures in similar areas. On the other hand, agents that start from the middle area, or from locations facing or within the button area, are capable of getting closer to the goal and also have a higher novelty measure.

Overall, the agents started from 172 cells in DM1 and DM2. The median result was below 1 (meaning that the agents were able to access the second room) for 95 and 120 out of the 172 starting points (55.2% and 69.7%) in DM1 and DM2 respectively. This shows that the agents in DM2 were more successful in getting closer to the goal; therefore, the first environment may be more difficult to solve, and/or NPSP0H may have an environmental bias towards DM2.

Figure 9 shows three additional environments (referred to as ENV1, ENV2 and ENV3) that we used to perform additional tests on the evolved NPSP rules. These environments were not used during the evolutionary process of the NPSP rules. The environments shown in the first column are the versions with the door closed, while those shown in the second column are the versions with the door opened. Green, blue and red areas indicate the button, the goal and the starting position of the agent respectively.

(a) ENV1 door closed (b) ENV1 door opened (c) ENV2 door closed (d) ENV2 door opened (e) ENV3 door closed (f) ENV3 door opened
Figure 9. Three additional test environments. Green, blue and red show the button, goal and starting position of the agent respectively. Figures (a), (c) and (e) show the versions of the environments where the door is closed, while Figures (b), (d) and (f) show the versions where the door leading to the goal is opened.

Table 4 shows the additional experimental results obtained on the environments shown in Figure 9. For each environment, the rows labelled “Novelty”, “Distance”, “Second” and “Goal” show, respectively, the median novelty measure, the median shortest distance to the goal, the number of times the agent accessed the room where the goal is located, and the number of times it reached the goal. Each algorithm was tested on each environment for 25 trials.

The results are similar to those obtained in the previous experiments. Overall, larger networks produced more novel behaviors. Similarly, NPSP0H shows competitive performance even against the network with the largest number of hidden neurons (i.e., NPSP50H).

Environment Algorithm RS0H NPSP0H RS15H NPSP15H RS30H NPSP30H RS50H NPSP50H
ENV1 Novelty 0.08 0.37 0.27 0.34 0.45 0.68 0.63 1.00
Distance 1.38 1.3 1.38 1.38 1.38 0.78 1.38 0.86
Second 1 11 0 9 1 18 8 17
Goal 0 7 0 2 0 1 0 4
ENV2 Novelty 0.10 0.43 0.29 0.48 0.47 0.74 0.64 1.00
Distance 1.30 0.94 1.30 1.30 1.30 1.30 1.30 1.30
Second 1 14 0 0 0 1 0 0
Goal 0 2 0 0 0 0 0 0
ENV3 Novelty 0.09 0.37 0.32 0.35 0.55 0.98 0.73 1.00
Distance 1.40 0 1.40 1.40 1.40 0.60 0.65 0.60
Second 1 20 3 11 6 19 18 20
Goal 1 18 0 2 0 1 0 2
Table 4. The median of the novelty and distance measures of agents tested on additional environments shown in Figure 9.

5. Conclusions and Future Work

In this work, we proposed using synaptic plasticity to allow learning in ANNs in cases where there are no fitness values or reinforcement signals. We refer to these problems as “needle in a haystack” problems due to the difficulty of finding solutions in a large search space. We proposed an evolutionary approach, dubbed novelty producing synaptic plasticity (NPSP), whose goal is to produce as many novel behaviors as possible in order to find the behavior that solves the problem. The NPSP performs synaptic changes based on a data structure (neuron activation traces) that stores the pairwise activations of neurons during an episode. We compared the NPSP with random search and random walk algorithms that are analogous to the NPSP except that they perform synaptic changes randomly. Our results show that the information about the pairwise activations of neurons introduced with the NATs helps increase the number of novel behaviors relative to random search and random perturbation.

We tested our algorithms on complex maze-navigation tasks where defining a fitness function is not straightforward. We observed a positive relation between producing novel behaviors and finding a solution in these tasks. We also investigated the generalizability of the NPSP rules by testing them from different starting points and in different environments that were not used for training. From some starting points, and in some environments, NPSP was not able to produce as many novel behaviors as it did in others.

We performed experiments on recurrent neural networks of various sizes. We observed that networks with a larger number of hidden neurons produced more novel behaviors. However, this did not directly translate into a higher chance of finding the goal position. This may be because larger networks can produce more complex behaviors, which do not necessarily lead to efficient (i.e., goal-reaching) exploration patterns in the environment.

There are several interesting research questions we aim to pursue starting from this work. First, we are not necessarily interested in finding all behavioral patterns, because many of them may not be useful: for instance, when exploring the environment, going back and forth or cycling around does not help. It would be interesting to introduce some sort of bias, or constraint, against generating certain types of behaviors. However, this may also restrict the search and prevent finding good solutions, so a way to guarantee a good compromise between solution novelty and solution efficiency should be investigated.

Second, it may be interesting to use multi-objective optimization to select the NPSP rules based also on their capability of getting closer to the goal. However, this may introduce an environmental bias (which is the main reason we did not use it in this work). To avoid that, the rules may need to be evaluated in many different environments.
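As a sketch of what such a multi-objective selection could look like, the following hypothetical Pareto filter keeps only the rules that are non-dominated with respect to novelty (to be maximized) and distance to the goal (to be minimized). The representation and scoring are assumptions for illustration, not part of the proposed method.

```python
def pareto_front(rules):
    """Return the non-dominated NPSP rules under two objectives:
    maximize novelty (first element), minimize distance (second element).

    `rules` is a list of (novelty, distance) score pairs, one per
    candidate rule (hypothetical representation).
    """
    def dominates(a, b):
        # a dominates b if it is no worse in both objectives and
        # strictly better in at least one
        return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

    return [r for r in rules
            if not any(dominates(other, r) for other in rules if other != r)]
```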

Another interesting research question concerns the granularity of the synaptic adjustments. Especially in large networks, small adjustments of the connections may add up to large behavioral changes. It would be interesting to investigate how to perform these changes so as to preserve behavioral continuity.

Finally, evolutionary computation is a powerful tool to discover different plasticity mechanisms in various learning scenarios. It may be interesting to investigate different plasticity mechanisms and see how they perform synaptic adjustments.
