Model Complexity of Neural Networks in High-Dimensional Approximation

Chapter in: Recent Advances in Intelligent Engineering Systems

Part of the book series: Studies in Computational Intelligence (SCI, volume 378)

Abstract

The role of dimensionality in approximation by neural networks is investigated. Methods from nonlinear approximation theory are used to describe sets of functions which can be approximated by neural networks with a polynomial dependence of model complexity on the input dimension. The results are illustrated by examples of Gaussian radial networks.
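
The chapter itself contains no code. As an illustration only, the sketch below assumes the standard form of a Gaussian radial-basis (Gaussian radial) network, f(x) = Σᵢ wᵢ exp(−‖x − cᵢ‖² / bᵢ²); the number n of hidden units is the model complexity whose dependence on the input dimension d is at issue. All names and data here are hypothetical.

```python
# Minimal illustrative sketch (not from the chapter): a Gaussian
# radial-basis-function network f(x) = sum_i w_i * exp(-||x - c_i||^2 / b_i^2).
# The number of hidden units n plays the role of "model complexity".
import numpy as np

def gaussian_rbf_network(x, centers, widths, weights):
    """Evaluate a Gaussian radial network at a batch of inputs.

    x       : (m, d) array of m input points in R^d
    centers : (n, d) array of hidden-unit centers
    widths  : (n,)   array of positive widths b_i
    weights : (n,)   array of output weights w_i
    Returns : (m,)   network outputs
    """
    # Squared Euclidean distances between every input and every center: (m, n).
    sq_dist = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian hidden-unit activations, one column per hidden unit.
    hidden = np.exp(-sq_dist / widths[None, :] ** 2)
    # Linear output layer: model complexity = number of columns n.
    return hidden @ weights

# Toy usage: n = 4 hidden units on d = 3 dimensional inputs.
rng = np.random.default_rng(0)
d, n, m = 3, 4, 5
outputs = gaussian_rbf_network(rng.normal(size=(m, d)),
                               rng.normal(size=(n, d)),
                               np.ones(n),
                               rng.normal(size=(n,)))
print(outputs.shape)  # (5,)
```

In this parametrization, a curse of dimensionality would mean that n must grow exponentially with d to maintain a fixed approximation accuracy; the sets of functions described in the chapter are those for which polynomial growth of n in d suffices.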

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Kůrková, V. (2012). Model Complexity of Neural Networks in High-Dimensional Approximation. In: Fodor, J., Klempous, R., Suárez Araujo, C.P. (eds) Recent Advances in Intelligent Engineering Systems. Studies in Computational Intelligence, vol 378. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23229-9_7

  • DOI: https://doi.org/10.1007/978-3-642-23229-9_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23228-2

  • Online ISBN: 978-3-642-23229-9

  • eBook Packages: Engineering, Engineering (R0)
