Global optimisation in neural network training

Neural Computing & Applications

Abstract

This paper presents a new neural network training scheme for pattern recognition applications. The training technique is a hybrid scheme that, first, uses the efficient BFGS optimisation method to locate minima of the total error function and, second, uses genetic algorithms to search for a global minimum. The paper also describes experiments comparing the performance of this scheme with three other hybrid schemes of the same kind on challenging pattern recognition problems; the experiments show that our scheme gives better results than the other three.
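
To illustrate the general idea of such a hybrid, the sketch below couples a simple genetic algorithm (global search over weight vectors) with BFGS local refinement via SciPy's minimize, applied to a tiny feed-forward network on the XOR task. The network size, error function, GA operators and parameters are illustrative assumptions for this sketch, not the specific scheme described in the paper.

# Minimal sketch of a hybrid GA + BFGS training loop (illustrative only,
# not the authors' exact algorithm): the GA explores weight space globally,
# BFGS refines each candidate towards a local minimum of the error function.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy pattern-recognition task: XOR with a 2-2-1 feed-forward network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])
N_W = 2 * 2 + 2 + 2 * 1 + 1   # 9 parameters: weights and biases

def unpack(w):
    W1, b1 = w[0:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8].reshape(2, 1), w[8]
    return W1, b1, W2, b2

def error(w):
    # Total sum-of-squares error over all patterns.
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)           # hidden layer
    y = (h @ W2).ravel() + b2          # linear output unit
    return np.sum((y - T) ** 2)

def refine(w):
    # Local phase: BFGS descent from a candidate weight vector.
    return minimize(error, w, method="BFGS").x

def tournament(pop, fitness):
    # Pick two candidates at random, keep the fitter one.
    i, j = rng.choice(len(pop), size=2, replace=False)
    return pop[i] if fitness[i] < fitness[j] else pop[j]

# Global phase: a simple GA with tournament selection, blend crossover,
# Gaussian mutation and elitism; every candidate is refined with BFGS.
POP, GENS = 20, 30
pop = [refine(rng.normal(size=N_W)) for _ in range(POP)]

for gen in range(GENS):
    fitness = np.array([error(w) for w in pop])
    new_pop = [pop[int(np.argmin(fitness))]]         # elitism: keep the best
    while len(new_pop) < POP:
        p1, p2 = tournament(pop, fitness), tournament(pop, fitness)
        alpha = rng.uniform(size=N_W)
        child = alpha * p1 + (1.0 - alpha) * p2      # blend crossover
        child += rng.normal(scale=0.1, size=N_W)     # Gaussian mutation
        new_pop.append(refine(child))                # hybrid step: local BFGS
    pop = new_pop

best = min(pop, key=error)
print("final total error:", error(best))

The division of labour is the point of the hybrid: BFGS converges quickly to whatever minimum lies near a candidate, while the genetic operators keep generating new starting regions, reducing the chance of the whole run stalling in a poor local minimum.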




About this article

Cite this article

Hui, L.C.K., Lam, K.Y. & Chea, C.W. Global optimisation in neural network training. Neural Comput & Applic 5, 58–64 (1997). https://doi.org/10.1007/BF01414103


