Lamarckian Evolution and the Baldwin Effect in Evolutionary Neural Networks

by   P. A. Castillo, et al.

Hybrid neuro-evolutionary algorithms may be inspired by Darwinian or Lamarckian evolution. In the case of Darwinian evolution, the Baldwin effect, that is, the progressive incorporation of learned characteristics into the genotype, can be observed and leveraged to improve the search. The purpose of this paper is to carry out an experimental study of how learning can improve G-Prop genetic search. Two ways of combining learning and genetic search are explored: one exploits the Baldwin effect, while the other uses a Lamarckian strategy. Our experiments show that using a Lamarckian operator makes the algorithm find networks with a low error rate and the smallest size, while using the Baldwin effect obtains MLPs with the smallest error rate and a larger size, taking longer to reach a solution. Both approaches obtain a lower average error than other BP-based algorithms such as RPROP, other evolutionary methods, and fuzzy-logic-based methods.





1 Introduction and State of the Art

Hybrid algorithms often implement non-Darwinian ideas, e.g. Lamarckian evolution or the Baldwin effect, where learning influences evolution.

Lamarck’s theory states that the characteristics an individual acquires during its lifetime are passed on to its offspring [1]. Thus the following generation inherits any acquired or learned characteristic, and this mechanism would be responsible for the evolution of species. According to this approach, learning has a great influence on evolution, since all learned characteristics are passed on to the following generation.

Nevertheless, Baldwin [2] and Waddington [3] argued that this influence is limited: individuals with a greater learning capacity adapt better to the environment and thus live longer. The longevity they acquire allows them to have more offspring over time and to propagate their abilities. As the number of offspring that have acquired the ability grows, the characteristic eventually becomes part of the genetic code.
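In an evolutionary algorithm, the two theories translate into one concrete design decision: whether the result of an individual's "lifetime learning" is written back into its genotype. The toy sketch below illustrates this distinction; the one-dimensional problem, the `learn()` local search and all parameter values are assumptions made for the example, not part of G-Prop.

```python
import random

# Toy illustration of Lamarckian vs. Baldwinian hybrid search on a
# one-dimensional problem: maximise f(x) = -(x - 3)^2.

def fitness(x):
    return -(x - 3.0) ** 2

def learn(x, steps=10, lr=0.1):
    """'Lifetime learning': a short gradient ascent on f, starting from x."""
    for _ in range(steps):
        x += lr * (-2.0 * (x - 3.0))  # derivative of f at x
    return x

def evolve(lamarckian, generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for g in pop:
            learned = learn(g)
            # Both strategies score the individual AFTER learning...
            score = fitness(learned)
            # ...but only the Lamarckian one writes the learned value back
            # into the genotype (inheritance of acquired characteristics).
            scored.append((score, learned if lamarckian else g))
        scored.sort(reverse=True)
        parents = [g for _, g in scored[: pop_size // 2]]
        # Truncation selection plus Gaussian mutation.
        pop = [p + rng.gauss(0.0, 0.5) for p in parents for _ in (0, 1)]
    return max(fitness(learn(g)) for g in pop)
```

Under the Baldwinian setting the genotype is untouched, yet selection still favours genotypes from which learning reaches good solutions, which is the essence of the effect.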

These ideas have previously been used by numerous researchers in different approaches:

  • Lamarckian mechanisms in hybrid evolutionary algorithms. Lamarckian theory is today totally discredited from the biological point of view, but it is possible to implement Lamarckian evolution in EAs, so that an individual can modify its genetic code during or after fitness evaluation (its “lifetime”). These ideas have been used by several researchers, with particular success in problems where applying a local search operator yields a substantial improvement, such as the travelling salesman problem (Gorges-Schleuter [4], Merz and Freisleben [5], Ross [6]). In general, hybrid algorithms are nowadays acknowledged as the best solution to a wide array of optimization problems.

  • Studying the Baldwin effect in hybrid algorithms [7, 8, 9, 10, 11]. Some authors have studied the Baldwin effect by carrying out a local search on certain individuals to improve their fitness without modifying their genetic code. This is the strategy proposed by Hinton and Nowlan [7], who found that learning alters the shape of the search space in which evolution operates, and that the Baldwin effect allows learning organisms to evolve much faster than their non-learning equivalents, even though the characteristics acquired by the phenotype are not communicated to the genotype. Ackley and Littman [10] studied the Baldwin effect in an artificial life system, finding that the experiments in which individuals had learning capabilities obtained the best results. Boers et al. [11] describe a hybrid algorithm to evolve ANN architectures whose effectiveness is explained by the Baldwin effect, implemented not as a learning process within the network, but as changes to the network architecture made as part of the learning process.

  • Comparative studies of Lamarckian mechanisms and the Baldwin effect in hybrid algorithms. Some studies have investigated whether a strategy based on a hybrid algorithm that takes advantage of the Baldwin effect is better or worse than one implementing Lamarckian mechanisms to accelerate the search [12]. The results obtained differ and are very dependent on the problem. Gruau and Whitley [13] compared Baldwinian, Lamarckian and Darwinian mechanisms implemented in a genetic algorithm that evolves ANNs, finding that the first two strategies are equally effective for solving their problem. Nevertheless, for another problem, the results obtained by Whitley et al. [14] show that taking advantage of the Baldwin effect can find the global optimum, while a Lamarckian strategy, although faster, usually converges to a local optimum. On the other hand, results obtained by Ku and Mak [15] with a GA designed to evolve recurrent neural networks show that the use of a Lamarckian strategy improves the algorithm, while the Baldwin effect does not. In Houck et al. [16] several algorithms are studied and similar conclusions drawn, as in [17], where Darwinian, Baldwinian and Lamarckian mechanisms are compared on the 4-cycle problem.

G-Prop (genetic evolution of BP-trained MLPs), used in this paper to tune the learning parameters and to set the initial weights and hidden-layer size of an MLP, searches for the optimal set of weights, the optimal topology and the learning parameters using an EA and Quick-Propagation (QP) [18]. In this method no ANN parameters have to be set by hand; the EA constants obviously need to be set, but the method is robust enough to obtain good results under the default parameter settings (all operators applied with the same probability, 300 generations and 200 individuals in the population).

This paper carries out a study of the Baldwin effect in the G-Prop [19, 20, 21, 22] method applied to pattern classification and function approximation problems. We compare our results with those of other authors, and intend to check the result obtained by Gruau and Whitley [13], i.e., that learning which modifies fitness without modifying the genetic code improves the task of finding an ANN to solve the problem at hand.

We compare the results obtained taking advantage of the Baldwin effect with those obtained using a Lamarckian local search mechanism. We also compare them with non-hybrid (RPROP [23]) and hybrid algorithms, and with methods based on fuzzy logic, to show that both versions of G-Prop obtain better results than (or at least comparable to) other methods, although one of the two versions is more likely to be trapped at a local optimum because it uses a local search genetic operator.

The remainder of this paper is structured as follows: Section 2 presents the new fitness functions designed to determine if the Baldwin effect takes place in G-Prop. Section 3 describes the experiments, Section 4 presents the results obtained, followed by a brief conclusion in Section 5.

2 The G-Prop Algorithm

In this section we will only describe the new fitness functions designed to determine if the Baldwin effect takes place in G-Prop. The complete description of the method and results on classification problems have been presented elsewhere [19, 20, 21, 22].

In G-Prop, the Darwinian fitness function is given by the classification / approximation ability obtained when carrying out validation after training; in the case of two individuals with identical ability, the better is the one with fewer hidden-layer neurons, which implies greater speed when training and classifying and facilitates hardware implementation.
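This two-level ordering can be expressed as a lexicographic sort key, sketched below. The dictionary fields (`ability`, `hidden_units`) are invented for the example and are not G-Prop's actual data structures:

```python
def darwinian_key(mlp):
    # Higher validation ability first; ties broken by fewer hidden
    # neurons (a smaller net trains and classifies faster).
    return (-mlp["ability"], mlp["hidden_units"])

# Hypothetical candidates: the first two tie on ability, so the
# smaller of them is preferred over the larger.
nets = [{"ability": 0.90, "hidden_units": 12},
        {"ability": 0.90, "hidden_units": 7},
        {"ability": 0.85, "hidden_units": 3}]
best = min(nets, key=darwinian_key)
```

Negating the ability turns "higher is better" into the ascending order that `min` and `sorted` expect, while the size term only matters on ties.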

The classification accuracy (number of hits) is obtained by dividing the number of hits by the total number of examples in the validation set. The approximation ability is measured using the normalized mean squared error (NMSE), given by:

NMSE = \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}     (1)

where y_i is the real output for example i, \hat{y}_i is the obtained output, and \bar{y} is the mean of all the real outputs.
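A direct implementation of this error measure can be sketched as follows, assuming the standard NMSE form (squared-error sum normalized by the summed squared deviation of the targets from their mean, so that a model that always predicts the mean scores 1.0):

```python
def nmse(y_true, y_pred):
    """Normalized mean squared error: the sum of squared errors divided
    by the summed squared deviation of the real outputs from their mean."""
    mean = sum(y_true) / len(y_true)
    num = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    den = sum((y - mean) ** 2 for y in y_true)
    return num / den
```

A perfect predictor gives 0, and values below 1 indicate an improvement over the trivial mean predictor.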

The Lamarckian approach uses no special fitness function; instead, a local search genetic operator (an application of QP) has been designed to improve the individuals, saving the individual's trained weights (its acquired characteristics) back to the population.

On the other hand, the Baldwin effect requires some type of learning to be applied to the individuals, and the changes (trained weights) are not codified back to the population. In order to take advantage of the Baldwin effect, the following fitness function is proposed: first, the classification/approximation ability of the individual on the validation set is calculated before training; then the individual is trained and its ability after training is calculated. Three criteria are used to decide which is the better individual: the best MLP is the one with the higher classification/approximation ability after training; if both MLPs show the same accuracy after training, the better is the one whose ability before training is higher (such an MLP is more likely to reach high accuracy when trained); and if both MLPs show the same accuracy before and after training, the better is the smaller one.
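The three criteria form a lexicographic ordering, which can be sketched as a sort key; the field names (`after`, `before`, `size`) are illustrative, not G-Prop's actual representation:

```python
def baldwinian_key(mlp):
    # Lexicographic order of the three criteria in the text:
    # 1) higher ability after training, 2) higher ability before
    # training, 3) smaller network.
    return (-mlp["after"], -mlp["before"], mlp["size"])

# Hypothetical MLPs: all tie on ability after training; the last two
# have the better untrained ability, and of those the smaller one wins.
candidates = [{"after": 0.92, "before": 0.60, "size": 40},
              {"after": 0.92, "before": 0.75, "size": 55},
              {"after": 0.92, "before": 0.75, "size": 48}]
best = min(candidates, key=baldwinian_key)
```

Note that the genotype is never modified here: the before/after-training abilities influence only selection, which is what distinguishes this fitness function from the Lamarckian operator.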

3 Experiments

The algorithm was run for a fixed number of generations, and a limit on the number of epochs was established when training each individual of the population to obtain its fitness. We used 300 generations, 200 individuals in the population, and 200 training epochs in every run, in order to avoid long simulation times and overfitted networks, making the EA carry out the search and the training operator refine the solutions. In addition, the number of epochs chosen was much smaller than that necessary to train a single MLP, so that the time taken to find a suitable network to solve the problem is similar to that which would be needed to train an MLP (obtaining similar results) using a method based on gradient descent. After an exhaustive test of the genetic operators, we decided to apply them all with the same priority (see [19, 20, 21, 22]). The learning operator (see [21, 22]) was only used when obtaining the results of the Lamarckian approach.

The tests used to assess the accuracy of a method must be chosen carefully, because some of them (e.g. the exclusive-or problem) do not exercise certain capabilities of the BP algorithm, such as generalization [24]. Our opinion, shared with Prechelt [25], is that at least two real-world problems should be used to test an algorithm.

We have used a pattern classification problem and a function approximation problem, in order to demonstrate the capacity of the proposed method to solve different kinds of problems, and also to show that the Baldwin effect takes place whatever the problem at hand.

In these experiments we use the Glass1a pattern classification problem (extracted from the Proben1 data sets), proposed by Prechelt [25] and used by Grönroos [26], as well as the function approximation problem given by equation (2).

Glass1a is a glass-type classification problem taken from [25]. The results of the chemical analysis of glass splinters (the percent content of 8 different elements) plus the refractive index are used to classify a sample as float-processed or non-float-processed building windows, vehicle windows, containers, tableware, or head lamps. This task is motivated by forensic needs in criminal investigation. The dataset was created from the glass problem dataset in the UCI repository of machine learning databases (http://www.ics.uci.edu/~mlearn/MLRepository.html) and contains 214 instances. Each sample has 9 attributes plus the class attribute: refractive index, sodium, magnesium, aluminium, silicon, potassium, calcium, barium, iron, and the type of glass.

The function given by equation (2) is an analytical function gathered by Cherkassky [27] and Sugeno [28], and used by Pomares [29] in his research on fuzzy-logic function approximation.

4 Results

Figure 1 shows average results over all the runs for both Lamarckian and Baldwinian approaches to classification and approximation problems. Plotted data correspond to the best individual in the population for each generation. The dotted line corresponds to the classification / approximation ability before training, while the dashed line corresponds to the classification / approximation ability after training, in the Baldwinian approach. The solid line corresponds to the Lamarckian approach.

Standard deviation values have not been plotted because they remain roughly constant along the generations; in any case, they show that the errors achieved are similar.

Figure 1: Average results over all the runs for both Lamarckian and Baldwinian approaches for the Glass problem. Average error is plotted above (on a vertical logscale) and size below.

Using Lamarckian evolution, a suitable MLP is found in the early generations of the simulation. However, that MLP remains the best throughout the simulation (evolution stops) because an “elitist” algorithm is used, and it tends to dominate the population due to its high fitness.

On the other hand, using the Baldwin effect, results can be as good as using Lamarckian evolution, although the method needs many more generations and the evolution of the population is much more progressive during the simulation. Results in size show that using the Lamarckian approach, MLPs are smaller than with the Baldwinian approach.

The method exhibits roughly the same behaviour on the function approximation problem.

Although it is not the aim of this paper to compare the G-Prop method with those of other authors, we do so in order to prove the capacity of both versions of G-Prop to solve pattern classification and function approximation problems, and to show how they outperform other methods.

Tables 1 and 2 show the average error rate, the average size of the nets expressed as the number of parameters (that is, the number of weights of the net), and the average number of generations until the best individual of the run is found.

The results for the Glass1a pattern classification problem (error on the test set), obtained using the Lamarckian mechanism, are compared in Table 1 with those obtained taking advantage of the Baldwin effect, and with those obtained by Prechelt [25] (using RPROP [23, 30]) and Grönroos [26] (using a hybrid algorithm).

Approach        Error    Size      Generations
Lamarckian      32 ± 2   59 ± 28   52 ± 54
Baldwinian      31 ± 2   112 ± 62  119 ± 55
Prechelt [25]   33 ± 5   350       -
Grönroos [26]   32 ± 5   350       -

Table 1: Results for the Glass1a problem obtained with G-Prop taking advantage of the Baldwin effect and for the Lamarckian approach, as well as those obtained by Prechelt and Grönroos, which are included for the sake of comparison.

It is evident that G-Prop outperforms the other methods (both in classification error and in the network size obtained): Prechelt [25] using RPROP [23, 30] obtained an error of 33 ± 5, and Grönroos [26] using Kitano’s network obtained 32 ± 5, while G-Prop achieves an error of 32 using the Lamarckian approach and 31 taking advantage of the Baldwin effect.

In the configuration that verifies whether the Baldwin effect takes place in G-Prop, the classification ability obtained is greater, although the networks are larger and more generations are needed to reach similar results.

The results for the function approximation problem, obtained using the Lamarckian mechanism, are compared with those obtained taking advantage of the Baldwin effect and those obtained by Pomares [29] in Table 2.

Approach        Error          Size         Generations
Lamarckian      0.09 ± 0.01    18 ± 8       34 ± 28
Baldwinian      0.086 ± 0.004  85 ± 27      97 ± 81
Pomares [29]    0.125          6 (4 rules)  -

Table 2: Results for the function approximation problem obtained with G-Prop taking advantage of the Baldwin effect and for the Lamarckian approach, as well as those obtained by Pomares, which are included for the sake of comparison.

The proposed method obtains better results in approximation ability (0.09 and 0.086 versus 0.125), although the networks obtained are greater in size and number of parameters (18 and 85 versus 6) than those obtained using fuzzy controllers.

The approximation ability obtained is greater using the configuration that verifies whether the Baldwin effect takes place in G-Prop, while the networks are slightly larger and more generations are needed to reach similar results.

Each run of the proposed method takes about 4 hours on an AMD-K7(tm) at 600 MHz, using the parameters described above.

The Lamarckian strategy achieves good enough results using, on average, fewer generations, although the Baldwinian strategy, using a suitable number of generations, can achieve the same or even better results.

The results obtained show that if the problem does not have many local minima and results must be obtained quickly, the best strategy is the Lamarckian. Otherwise, the Baldwinian strategy or a mixture of both is the best.

5 Conclusions

A study of the Baldwin effect in the G-Prop method [19, 20, 21, 22] (a hybrid algorithm to tune learning parameters, initial weights and hidden layer size of a MLP using an EA and QP) has been carried out. A comparison between the results obtained taking advantage of the Baldwin effect and those obtained using a local search Lamarckian mechanism has been made.

The results obtained agree with those presented by Whitley et al. [14], and show that the use of a Lamarckian strategy makes the method obtain good solutions faster than when the Baldwin effect is used, although it is more likely to be trapped in a local optimum than the approach that takes advantage of the Baldwin effect. However, the errors are not significantly worse.

Figures show how a Lamarckian strategy finds a suitable MLP in the early generations which remains the best during the simulation (evolution stops); with the Baldwin effect, results can be as good as those of Lamarckian evolution, although the method needs many more generations and the evolution is much more progressive.

It should also be observed that the neural nets obtained using a Lamarckian strategy are smaller, which contributes to learning speed. Besides, a small network is fast when training and classifying, and obtaining it in fewer generations means less time is needed to design it.

Another interesting result is that when the Lamarckian training operator is used, learning contributes more to fitness improvement at the beginning of the simulations [31]. This is due to the use of an elitist algorithm: when the operator is applied to an MLP, that individual gains an advantage over the remaining members of the population and continues to be the best individual until the end of the simulation. This can also be shown using visualization techniques [32].


Acknowledgements

This work has been supported in part by CICYT TIC99-0550, INTAS 97-30950, HPRN-CT-2000-00068 and IST-1999-12679.


References

  • [1] J.B. Lamarck, “Philosophie zoologique,” 1809.
  • [2] J.M. Baldwin, “A new factor in evolution,” American Naturalist 30, 441-451, 1896.
  • [3] C.H. Waddington, “Canalization of development and the inheritance of acquired characteristics,” Nature, 3811, pp. 563-565, 1942.
  • [4] M. Gorges-Schleuter, “Asparagos96 and the traveling salesman problem,” In Proceedings of 1997 IEEE International Conference on Evolutionary Computation, pp. 171-174. IEEE, 1997.
  • [5] P. Merz; B. Freisleben, “Genetic local search for the TSP: New results,” In Proceedings of 1997 IEEE International Conference on Evolutionary Computation, pp. 159-163. IEEE, 1997.
  • [6] B.J. Ross, “A Lamarckian evolution strategy for genetic algorithms,” In Lance D. Chambers, editor, Practical Handbook of Genetic Algorithms: Complex Coding Systems, volume III, pp. 1-16. Boca Raton, FL: CRC Press, 1999.
  • [7] G.E. Hinton and S.J. Nowlan, “How learning can guide evolution,” Complex Systems, 1, 495-502, 1987.
  • [8] R.K. Belew, “When both individuals and populations search: Adding simple learning to the genetic algorithm.,” In 3th Intern. Conf. on Genetic Algorithms, D. Schaffer, ed., Morgan Kaufmann, 1989.
  • [9] I. Harvey, “The puzzle of the persistent question marks: a case study of genetic drift,” In 5th International Conference on Genetic Algorithms, pp. 15-22, S. Forrest, ed. Morgan Kaufmann, 1993.
  • [10] D.H. Ackley and M. Littman, “Interactions between learning and evolution,” In C.G. Langton, C. Taylor, J.D. Garmer, and S. Rasmussen (Editors), Artificial Life II, 487-507, Addison-Wesley, Reading, MA, 1992.
  • [11] E.J.W. Boers; M.V. Borst; I.G. Sprinkhuizen-Kuyper, “Evolving Artificial Neural Networks using the ‘Baldwin Effect’,” In D.W. Pearson, N.C. Steele and R.F. Albrecht (eds.), Artificial Neural Nets and Genetic Algorithms. Proceedings of the International Conference in Alès, France, pp. 333-336, Springer Verlag Wien New York, 1995.
  • [12] M. Huesken; J.E. Gayko; B. Sendhoff, “Optimization for Problem Classes - Neural Networks that Learn to Learn,” Proceedings of the First IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks (ECNN 2000), IEEE Press, 2000.
  • [13] F. Gruau and D. Whitley, “Adding learning to the cellular development of neural networks: Evolution and the Baldwin effect,” Evolutionary Computation, Volume I, No. 3, pp. 213-233, 1993.
  • [14] D. Whitley; V.S. Gordon; K. Mathias, “Lamarckian Evolution, The Baldwin Effect and Function Optimization,” Parallel Problem Solving from Nature-PPSN III. Y. Davidor, H.P. Schwefel and R. Manner, eds. pp. 6-15. Springer-Verlag, 1994.
  • [15] K.W.C. Ku and M.W. Mak, “Exploring the effects of Lamarckian and Baldwinian learning in evolving recurrent neural networks,” In Proceedings of 1997 IEEE International Conference on Evolutionary Computation, pp. 159-163. IEEE, 1997.
  • [16] C. Houck; J.A. Joines; M.G. Kay; J.R. Wilson, “Empirical investigation of the benefits of partial Lamarckianism,” Evolutionary Computation, v.5, n.1, pp. 31-60, 1997.
  • [17] B.A. Julstrom, “Comparing Darwinian, Baldwinian and Lamarckian Search in a Genetic Algorithm for the 4-Cycle Problem,” In Genetic and Evolutionary Computation Conference, Late Breaking Papers, pp. 134-138, Orlando, USA, 1999.
  • [18] S.E. Fahlman, “Faster-Learning Variations on Back-Propagation: An Empirical Study,” Proceedings of the 1988 Connectionist Models Summer School, Morgan Kaufmann, 1988.
  • [19] P.A. Castillo, J. González, J.J. Merelo, V. Rivas, G. Romero, and A. Prieto, “SA-Prop: Optimization of Multilayer Perceptron Parameters using Simulated Annealing,” Lecture Notes in Computer Science, ISBN:3-540-66069-0, Vol. 1606, pp. 661-670, Springer-Verlag, 1999.
  • [20] P.A. Castillo, J.J. Merelo, V. Rivas, G. Romero, and A. Prieto, “G-Prop: Global Optimization of Multilayer Perceptrons using GAs,” Neurocomputing, Vol.35/1-4, pp.149-163, 2000.
  • [21] P.A. Castillo, J. Carpio, J.J. Merelo, V. Rivas, G. Romero, and A. Prieto, “Evolving Multilayer Perceptrons,” Neural Processing Letters, vol. 12, no. 2, pp.115-127. October, 2000.
  • [22] P.A. Castillo, M.G. Arenas, J.G. Castellano, M. Cillero, J.J. Merelo, A. Prieto, V. Rivas, and G. Romero, “Function Approximation with Evolved Multilayer Perceptrons,” Advances in Neural Networks and Applications. Artificial Intelligence Series. Nikos E. Mastorakis, Editor. ISBN:960-8052-26-2, pp. 195-200, World Scientific and Engineering Society Press, 2001.
  • [23] M. Riedmiller and H. Braun, “A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm,” In Ruspini, H., (Ed.) Proc. of the ICNN93, San Francisco, pp. 586-591, 1993.
  • [24] S. Fahlman, “An empirical study of learning speed in back-propagation networks,” Tech. Rep., Carnegie Mellon University, 1988.
  • [25] Lutz Prechelt, “PROBEN1 — A set of benchmarks and benchmarking rules for neural network training algorithms,” Technical Report 21/94, Fakultät für Informatik, Universität Karlsruhe, D-76128 Karlsruhe, Germany, 1994.
  • [26] M.A. Grönroos, “Evolutionary Design of Neural Networks,” Master of Science Thesis in Computer Science. Department of Mathematical Sciences. University of Turku., 1998.
  • [27] V. Cherkassky; D. Gehring; F. Mulier, “Comparison of adaptive methods for function estimation from samples,” IEEE Trans. Neural Networks, vol. 7, no. 4, pp. 969-984, 1996.
  • [28] M. Sugeno and T. Yasukawa, “A fuzzy-logic based approach to qualitative modeling,” IEEE Fuzzy Sets and Systems, vol. 1, no. 1, 1993.
  • [29] H. Pomares Cintas, “Nueva metodología para el diseño automático de sistemas difusos,” Tesis Doctoral. Departamento de Arquitectura y Tecnología de Computadores. Universidad de Granada, 1999.
  • [30] M. Riedmiller, “Description and Implementation Details,” Tech. Rep., University of Karlsruhe, 1994.
  • [31] M. Oliveira; J. Barreiros; E. Costa; F. Pereira, “LamBaDa: An Artificial Environment to Study the Interaction between Evolution and Learning,” In Congress on Evolutionary Computation, Volume I, pp. 145-152, Washington D.C., USA, 1999.
  • [32] G. Romero, M.G. Arenas, J. Carpio, J.G. Castellano, P.A. Castillo, J.J. Merelo, A. Prieto, and V. Rivas, “Evolutionary Computation Visualization: Application to G-Prop,” Lecture Notes in Computer Science, ISBN:3-540-41056-2, Vol.1917, pp.902-912. September, 2000.