Abstract
This paper presents a new neural network training scheme for pattern recognition applications. Our technique is a hybrid scheme that combines, firstly, the efficient BFGS optimisation method, used to locate minima of the total error function, and, secondly, a genetic algorithm, used to search for the global minimum. This paper also describes experiments comparing the performance of our scheme with three other hybrid schemes of this kind on challenging pattern recognition problems. In these experiments, our scheme gives better results than the other three.
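To make the flavour of such a hybrid concrete, the following is a minimal sketch (not the authors' implementation) of a GA + BFGS training loop for a small feed-forward network. It assumes NumPy and SciPy's BFGS optimiser; the 2-2-1 network, the XOR task, and all GA parameters are illustrative choices only.

```python
# Hybrid GA + BFGS sketch: a genetic algorithm searches weight space
# globally, while BFGS refines each candidate towards a local minimum
# of the total error function.  Assumes NumPy and SciPy are available.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy task: XOR with a 2-2-1 network (sigmoid activations).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
N_W = 2 * 2 + 2 + 2 * 1 + 1  # weights + biases = 9 parameters

def unpack(w):
    W1 = w[0:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8:9]
    return W1, b1, W2, b2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def total_error(w):
    """Sum-of-squares error over the training set."""
    W1, b1, W2, b2 = unpack(w)
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    return float(np.sum((y - T) ** 2))

def bfgs_refine(w, max_iter=50):
    """Local search: descend to a nearby minimum with BFGS."""
    res = minimize(total_error, w, method="BFGS",
                   options={"maxiter": max_iter})
    return res.x, res.fun

# Genetic algorithm over weight vectors, with BFGS refinement of each
# offspring (a Lamarckian-style hybrid; the paper's scheme may differ).
POP, GENS = 20, 30
pop = [bfgs_refine(rng.normal(0, 1, N_W))[0] for _ in range(POP)]

for gen in range(GENS):
    fitness = np.array([total_error(w) for w in pop])
    order = np.argsort(fitness)
    parents = [pop[i] for i in order[:POP // 2]]   # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        mask = rng.random(N_W) < 0.5               # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child = child + rng.normal(0, 0.1, N_W)    # Gaussian mutation
        child, _ = bfgs_refine(child)              # local BFGS descent
        children.append(child)
    pop = parents + children

best = min(pop, key=total_error)
print("best total error:", total_error(best))
```

The division of labour shown here mirrors the idea in the abstract: BFGS supplies fast convergence to a local minimum of the error surface, while the genetic operators keep the search from being trapped in a single basin of attraction.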
Cite this article
Hui, L.C.K., Lam, KY. & Chea, C.W. Global optimisation in neural network training. Neural Comput & Applic 5, 58–64 (1997). https://doi.org/10.1007/BF01414103