Publisher's Synopsis
This book presents a systematic approach to the parallel implementation of feedforward neural networks on an array of transputers. The emphasis is on backpropagation learning and training-set parallelism. Through systematic analysis, a theoretical model of the parallel implementation is developed and used to find the optimal mapping that minimizes training time for large backpropagation neural networks. The model is validated experimentally on several well-known benchmark problems. The use of genetic algorithms to optimize the performance of the parallel implementations is described, and guidelines for efficient parallel implementations are highlighted.