Palmer-Brown, Dominic; Kang, Miao (2006)
Languages: English
Types: Unknown
Artificial neural network learning is typically accomplished via adaptation between neurons. This paper describes adaptation that is simultaneously between and within neurons. The conventional neurocomputing wisdom is that, by adapting the pattern of connections between neurons, a network can learn to respond differentially to classes of incoming patterns. The success of this approach, in an age of massively increasing computing power that has made high-speed neurocomputing feasible on the desktop and, more recently, in the palm of the hand, has meant that little attention has been paid to the implications of adaptation within individual neurons. The computational assumption has tended to be that the internal neural mechanism is fixed. However, there are good computational and biological reasons for examining the internal neural mechanisms of learning. Recent neuroscience suggests that neuromodulators play a role in learning by modifying the neuron's activation function [Scheler], and with an adaptive-function approach it is possible to learn linearly inseparable problems quickly, even without hidden nodes. The ADaptive FUnction Neural Network (ADFUNN) presented in this paper is based on a piecewise-linear neuron activation function that is modified by a novel gradient-descent supervised learning algorithm [Palmer-Brown; Kang]. It has been applied to the Iris dataset and a natural-language phrase recognition problem, exhibiting impressive generalisation and classification ability with no hidden neurons.
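
To make the approach concrete, below is a minimal Python sketch of a single adaptive-function neuron. It is an illustration, not the authors' exact ADFUNN algorithm: the grid resolution, learning rates, and the proximity-weighted update rule are assumptions chosen for clarity. The idea it demonstrates is the one the abstract describes: the activation function is stored as learnable output values at fixed points on an activation grid, and training adapts both the input weights (between-neuron learning) and the function values (within-neuron learning).

    import numpy as np

    class AdaptiveFunctionNeuron:
        """A neuron whose activation function is itself learned.

        The function is represented piecewise-linearly: a fixed grid
        of activation values with a learnable output value F at each
        grid point. All hyperparameters are illustrative assumptions.
        """

        def __init__(self, n_inputs, n_points=21, a_range=4.0,
                     lr_w=0.01, lr_f=0.1):
            rng = np.random.default_rng(0)
            self.w = rng.normal(scale=0.1, size=n_inputs)  # input weights
            self.grid = np.linspace(-a_range, a_range, n_points)
            self.F = np.zeros(n_points)  # learnable function values
            self.lr_w, self.lr_f = lr_w, lr_f

        def forward(self, x):
            # Weighted-sum activation, clipped to the grid's domain.
            a = float(np.clip(self.w @ x, self.grid[0], self.grid[-1]))
            # Output is linear interpolation of F between grid points.
            return a, float(np.interp(a, self.grid, self.F))

        def train_step(self, x, target):
            a, y = self.forward(x)
            err = target - y
            # Locate the two grid points bracketing the activation.
            i1 = min(max(int(np.searchsorted(self.grid, a)), 1),
                     len(self.grid) - 1)
            i0 = i1 - 1
            t = (a - self.grid[i0]) / (self.grid[i1] - self.grid[i0])
            # Within-neuron adaptation: nudge the bracketing F values
            # toward the target, weighted by proximity to a.
            self.F[i0] += self.lr_f * err * (1.0 - t)
            self.F[i1] += self.lr_f * err * t
            # Between-neuron adaptation: gradient descent through the
            # local slope of the piecewise-linear function.
            slope = (self.F[i1] - self.F[i0]) / (self.grid[i1] - self.grid[i0])
            self.w += self.lr_w * err * slope * x
            return err

A usage sketch, mirroring the abstract's claim that linearly inseparable problems can be learned without hidden nodes: XOR is the classic linearly inseparable case, and here the weights are frozen at 1 so the adaptive function alone does the work.

    # A single neuron, no hidden layer, learning XOR.
    xor = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
           ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]
    neuron = AdaptiveFunctionNeuron(n_inputs=2, lr_w=0.0)
    neuron.w = np.array([1.0, 1.0])  # activations become 0, 1, 1, 2
    for _ in range(500):
        for x, tgt in xor:
            neuron.train_step(np.array(x), tgt)
    for x, tgt in xor:
        print(x, "->", round(neuron.forward(np.array(x))[1], 2), "target:", tgt)

With the weights unfrozen, the same update rule also adapts the connection pattern, giving the simultaneous between- and within-neuron adaptation the paper describes.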
References:

Scheler, G. (2004). Regulation of neuromodulator receptor efficacy - implications for whole-neuron and synaptic plasticity. Progress in Neurobiology, Vol. 72, No. 6.
