Abstract
An approximation of the Gaussian model for density estimation in high-dimensional data spaces is presented. The work is mainly motivated by the need for a numerically tractable model in such spaces. The characteristic of the model is to restrict each local covariance matrix to its principal components, with the advantage of still being a probabilistic model. The likelihood of the local density is studied, and an iterative algorithm is then proposed to learn the model; the latter is an adaptation of the well-known iterative Generalized Hebbian Algorithm. A comparison is made with related work based on factor analysis. First experiments on handwritten digits are also reported.
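The paper's formulation is not reproduced on this page, but a minimal sketch can illustrate the two ingredients the abstract names: Sanger's Generalized Hebbian Algorithm for extracting the leading principal directions (Sanger, 1989, in the references below) and a Gaussian density restricted to those directions, written here in the PPCA style of Tipping and Bishop (1997, below), where the discarded directions share a single residual variance. The function names, learning-rate settings, and residual-variance treatment are illustrative assumptions, not the paper's exact model.

import numpy as np

def gha(X, q, lr=0.001, epochs=50, seed=0):
    # Sanger's Generalized Hebbian Algorithm (Sanger, 1989): the rows of W
    # converge approximately to the top-q principal directions of the
    # centered data X, learned one sample at a time.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(q, d))
    for _ in range(epochs):
        for x in X[rng.permutation(n)]:
            y = W @ x  # component activations
            # Hebbian term minus lower-triangular decorrelation term
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def pc_gaussian_logpdf(x, mean, W, var, resid_var):
    # Gaussian log-density restricted to q principal components: the q
    # leading variances are kept exactly, and the remaining d-q directions
    # share one residual variance (a PPCA-style approximation; assumes the
    # rows of W are approximately orthonormal).
    d, q = x.size, W.shape[0]
    z = W @ (x - mean)                               # subspace coordinates
    r2 = np.sum((x - mean) ** 2) - np.sum(z ** 2)    # squared residual norm
    logdet = np.sum(np.log(var)) + (d - q) * np.log(resid_var)
    maha = np.sum(z ** 2 / var) + r2 / resid_var
    return -0.5 * (d * np.log(2 * np.pi) + logdet + maha)

# Toy usage: learn 3 directions on synthetic 10-d data, evaluate one point.
X = np.random.default_rng(1).normal(size=(500, 10)) * np.arange(1, 11)
mu = X.mean(axis=0)
Xc = X - mu
W = gha(Xc, q=3)
var = np.array([np.var(Xc @ w) for w in W])      # variance along each PC
resid_var = (np.var(Xc, axis=0).sum() - var.sum()) / (10 - 3)
print(pc_gaussian_logpdf(X[0], mu, W, var, resid_var))

In a mixture, one such local density would be fitted per component, with the GHA update applied to data weighted by component responsibilities; the sketch above shows only a single local model.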
References
Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977), "Maximum Likelihood from Incomplete Data via the EM Algorithm", J. Royal Stat. Soc. B 39(1), 1–38.
Fukunaga, K. (1990), "Introduction to Statistical Pattern Recognition", second edition, Academic Press.
Ghahramani, Z., Hinton, G. E. (1997), "The EM Algorithm for Mixtures of Factor Analyzers", University of Toronto Technical Report CRG-TR-96-1. ftp://ftp.cs.toronto.edu/pub/zoubin/tr-96-l.ps.gz
Hinton, G. E., Dayan, P., Revow, M. (1997), "Modelling the Manifolds of Images of Handwritten Digits", IEEE Transactions on Neural Networks, 8(1), 65–74.
Jordan, M. I., Xu, L. (1996), "Convergence Results for the EM Approach to Mixtures of Experts Architectures", Neural Networks, 8(9), 1409–1431.
Kambhatla, N., Leen, T. K. (1995), "Classifying with Gaussian Mixtures and Clusters", in Advances in Neural Information Processing Systems 7.
Lemarié, B., Gilloux, M. and Leroux, M. (1996), "Handwritten Word Recognition using Contextual Hybrid RBF Networks/Hidden Markov Models", in Advances in Neural Information Processing Systems 8.
Lemarié, B. (1998), "Mélanges de gaussiennes en composantes principales pour l'estimation de densité" [Mixtures of principal-component Gaussians for density estimation], in Proceedings of RFIA, AFCET, France.
Mao, J., Jain, A. K. (1995), "Artificial Neural Networks for Feature Extraction and Multivariate Data Projection", IEEE Transactions on Neural Networks, 6(2).
Neal, R. M., Hinton, G. E. (1993), "A New View of the EM Algorithm that Justifies Incremental and Other Variants", University of Toronto, Dept. of Computer Science, preprint.
Oja, E. (1982), "A Simplified Neuron Model as a Principal Component Analyzer", Journal of Mathematical Biology, 16, 267–273.
Ormoneit, D., Tresp, V. (1996), "Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging", in Advances in Neural Information Processing Systems 8.
Rao, C. R. (1955), "Estimation and Tests of Significance in Factor Analysis", Psychometrika, 20, 93–111.
Sanger, T. D. (1989), "Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network", Neural Networks, 2, 459–473.
Tipping, M. E., Bishop, C. M. (1997), "Mixtures of Probabilistic Principal Component Analysers", Aston University NCRG Technical Report. http://neural-server.aston.ac.uk/cgi-bin
Xu, L., Jordan, M. I. (1996), "On Convergence Properties of the EM Algorithm for Gaussian Mixtures", Neural Computation, 8, 129–151.
Xu, L. (1993), "Least Mean Square Error Reconstruction Principle for Self-Organizing Neural Nets", Neural Networks, 6, 627–648.
Copyright information
Ā© 1998 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Lemarié, B. (1998). Mixtures of principal components Gaussians for density estimation in high dimension data spaces. In: Amin, A., Dori, D., Pudil, P., Freeman, H. (eds) Advances in Pattern Recognition. SSPR/SPR 1998. Lecture Notes in Computer Science, vol 1451. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0033324
DOI: https://doi.org/10.1007/BFb0033324
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64858-1
Online ISBN: 978-3-540-68526-5
eBook Packages: Springer Book Archive