Comparison between Lamarckian and Baldwinian Evolution of Neural Networks

Abstract

Genetic algorithms are efficient at exploring the entire search space; however, they are relatively poor at finding the precise local optimum in the region where the algorithm converges. Hybrid genetic algorithms combine a learning algorithm (backpropagation), usually applied during fitness evaluation, with a genetic algorithm. There are two basic strategies for hybridizing GAs: Lamarckian and Baldwinian evolution. Traditional schema theory does not support Lamarckian learning, i.e., forcing the genetic representation to match the solution found by the learning algorithm. However, Lamarckian learning does alleviate the problem of multiple genotypes mapping to the same phenotype. Baldwinian learning uses the learning algorithm to change the fitness landscape, but the solution that is found is not encoded back into the genetic string. We present hybrid genetic algorithms that optimize both the weights and the topology of artificial neural networks, incorporating the Lamarckian and Baldwinian evolution effects. An extensive set of experiments shows that the hybrid genetic algorithm exploiting the Baldwin effect is more effective than Lamarckian evolution but slower to converge, and that the proposed algorithms outperform previous algorithms.
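The distinction between the two strategies can be sketched in code. The following is a minimal illustration, not the paper's actual method: a toy quadratic fitness stands in for network error, a few gradient steps stand in for backpropagation, and the genotype is a plain weight vector. The only difference between the two variants is whether the learned weights are written back into the genotype (Lamarckian) or merely used to score it (Baldwinian).

```python
import random

TARGET = [0.5, -0.3, 0.8]  # hypothetical optimum standing in for trained network weights

def fitness(w):
    # Toy fitness: negative squared error to the target vector.
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def learn(w, steps=5, lr=0.2):
    # Local search standing in for backpropagation: gradient ascent on fitness.
    w = list(w)
    for _ in range(steps):
        w = [wi + lr * 2 * (ti - wi) for wi, ti in zip(w, TARGET)]
    return w

def evolve(strategy, generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = []
        for ind in pop:
            improved = learn(ind)
            if strategy == "lamarckian":
                ind = improved  # write the learned solution back into the genotype
            # Both strategies evaluate fitness AFTER learning, so learning
            # reshapes the fitness landscape (the Baldwin effect).
            scored.append((fitness(improved), ind))
        scored.sort(key=lambda s: s[0], reverse=True)
        parents = [ind for _, ind in scored[: pop_size // 2]]
        # Reproduce with Gaussian mutation.
        pop = [[wi + rng.gauss(0, 0.05) for wi in rng.choice(parents)]
               for _ in range(pop_size)]
    return max(fitness(learn(ind)) for ind in pop)
```

Under either strategy, selection favors individuals whose learned phenotype scores well; only the Lamarckian variant also inherits what was learned, which is why it typically converges faster on this kind of smooth landscape.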

Keywords

Neural Networks