Dybowski, Richard (1998)
Languages: English
Types: Article
The paper describes the use of radial basis function neural networks with Gaussian basis functions to classify incomplete feature vectors. The method uses the fact that any marginal distribution of a Gaussian distribution can be determined from the mean vector and covariance matrix of the joint distribution.
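The marginalisation property the abstract relies on can be sketched as follows: to evaluate a Gaussian basis function on an incomplete feature vector, select the entries of the mean vector and the rows/columns of the covariance matrix that correspond to the observed features, and evaluate the lower-dimensional Gaussian density on those alone. This is a minimal illustration of that step only, not the paper's implementation; the function name and argument layout are assumed.

```python
import numpy as np

def marginal_gaussian_pdf(x_obs, observed_mask, mean, cov):
    """Density of the marginal Gaussian over the observed components.

    For a multivariate Gaussian, the marginal over any subset of
    dimensions is itself Gaussian, with mean and covariance obtained
    by selecting the corresponding entries of the joint parameters.

    x_obs         : values of the observed features (1-D array)
    observed_mask : boolean array over all features, True = observed
    mean, cov     : mean vector and covariance matrix of the joint
    """
    mu = mean[observed_mask]                       # marginal mean
    sigma = cov[np.ix_(observed_mask, observed_mask)]  # marginal covariance
    d = int(observed_mask.sum())
    diff = x_obs - mu
    # Standard multivariate normal density in d dimensions
    norm = (2.0 * np.pi) ** (d / 2.0) * np.sqrt(np.linalg.det(sigma))
    return float(np.exp(-0.5 * diff @ np.linalg.solve(sigma, diff)) / norm)
```

In an RBF network with Gaussian hidden units, each basis-function activation for an incomplete input could then be computed from the marginalised density of that unit's Gaussian, leaving the rest of the network unchanged.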
