Implementation of Genetic Algorithm on Distributed Memory

Abstract

This paper focuses on implementing a genetic algorithm on a distributed-memory parallel computer, which can be programmed using the MPI (Message Passing Interface) standard. We model the parallel computation as a directed acyclic graph (DAG). The work is based on a priority task-graph representation of jobs that contain sequential segments with varying dependencies. We consider compile-time static scheduling when communication overhead is not negligible. Dominant Sequence Clustering (DSC) is used to cluster tasks, and a load-balancing and physical-mapping heuristic then maps the clusters onto processors. The main optimization issues are balancing computation among processors, reducing inter-processor communication, and overlapping communication with computation. Theoretical and experimental results are presented to verify the performance of these algorithms.
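To make the cluster-to-processor mapping step concrete, the following is a minimal illustrative sketch, not the paper's actual heuristic: after clustering, each cluster carries a total computation cost, and a simple greedy load-balancing rule assigns the heaviest remaining cluster to the currently least-loaded processor. The cluster names, costs, and processor count below are assumptions for illustration only.

```python
# Hypothetical sketch of a greedy load-balancing mapping: assign task
# clusters (with known computation costs) onto a fixed set of processors,
# heaviest clusters first, each to the least-loaded processor so far.
# Cluster costs and processor count here are illustrative, not from the paper.

def map_clusters(cluster_costs, num_procs):
    """Map each cluster to the currently least-loaded processor.

    cluster_costs: dict of cluster id -> total computation cost.
    Returns (assignment, loads): cluster id -> processor index, plus
    the resulting per-processor load.
    """
    loads = [0.0] * num_procs
    assignment = {}
    # Longest-processing-time rule: place heaviest clusters first.
    for cid, cost in sorted(cluster_costs.items(), key=lambda kv: -kv[1]):
        p = min(range(num_procs), key=lambda i: loads[i])
        assignment[cid] = p
        loads[p] += cost
    return assignment, loads

if __name__ == "__main__":
    clusters = {"C0": 8.0, "C1": 5.0, "C2": 4.0, "C3": 3.0, "C4": 2.0}
    assignment, loads = map_clusters(clusters, num_procs=2)
    print(assignment, loads)
```

A real implementation would also weigh inter-processor communication: two clusters that exchange large messages should prefer the same processor, which this cost-only sketch ignores.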