An Adaptive Back-Propagation Learning Method: A Preliminary Study for Incremental Neural Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Almost all traditional learning methods for artificial neural networks assume a stationary training set. This assumption is biologically implausible and strong enough to rule out many interesting applications. In this paper, we minimize a combined cost of weight sensitivity and training squared error using gradient-descent optimization, obtaining a new supervised back-propagation learning algorithm for a biased two-layered perceptron. Besides illustrating the conflict locality of an inserted training instance with respect to previous training data, we show that this adaptive learning method yields a network with measurable generalization ability. The work can also be extended to an incremental network in which no training instances need to be remembered.

Original language: English
Title of host publication: Proceedings - 1992 International Joint Conference on Neural Networks, IJCNN 1992
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 713-718
Number of pages: 6
ISBN (Electronic): 0780305590
DOIs
State: Published - 1992
Externally published: Yes
Event: 1992 International Joint Conference on Neural Networks, IJCNN 1992 - Baltimore, United States
Duration: 07 06 1992 – 11 06 1992

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 1

Conference

Conference: 1992 International Joint Conference on Neural Networks, IJCNN 1992
Country/Territory: United States
City: Baltimore
Period: 07/06/92 – 11/06/92

Bibliographical note

Publisher Copyright:
© 1992 IEEE.
