Monday, July 26, 2010

Neural Networks

A neural network (NN) is a simplified model of the biological neuron system: a massively parallel distributed processing system made up of highly interconnected neural computing elements that can learn, thereby acquiring knowledge and making it available for use.
Various learning mechanisms exist to enable an NN to acquire knowledge, and NN architectures are classified into various types based on the learning mechanism they use. The process by which an NN learns is called 'training', and the use of the acquired knowledge to solve a problem is called 'inference'.
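To make the training/inference distinction concrete, here is a minimal sketch (an assumed example, not taken from the post): a single artificial neuron trained with the classic perceptron learning rule to compute the logical AND function. The data, learning rate, and epoch count are all illustrative choices.

```python
import numpy as np

def step(x):
    # Threshold activation: fire (1) if the weighted input is non-negative
    return 1 if x >= 0 else 0

# Training data: the four input patterns and target outputs for AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# Knowledge is stored in the weights and bias, which start at zero
w = np.zeros(2)
b = 0.0
lr = 0.1  # learning rate (illustrative value)

# Training: repeatedly nudge the weights toward the target outputs
for _ in range(20):
    for xi, target in zip(X, y):
        pred = step(np.dot(w, xi) + b)
        error = target - pred
        w += lr * error * xi
        b += lr * error

# Inference: apply the learned weights to new queries
for xi in X:
    print(xi, "->", step(np.dot(w, xi) + b))
```

After training, the neuron reproduces the AND truth table (0, 0, 0, 1), showing how knowledge acquired during training is made available for later use.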
A human brain develops over time through what is generally called 'experience'. Technically, this involves the development of neurons that adapt to their surrounding environment, rendering the brain 'plastic' in its information-processing capability. NN architectures exhibit a similar property of plasticity. At the same time, 'stability' is also desired, i.e., the NN should retain its adaptive capability in the face of a changing environment. This is because NN systems, being essentially learning systems, need to preserve the information already learnt while at the same time remaining receptive to learning new information. The NN needs to remain 'plastic' to significant or useful information, but remain 'stable' when presented with irrelevant information.

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques.
