IEEE Trans Neural Netw. 2004 Nov;15(6):1378-95.
PMID: 15565767

Abstract

This paper presents two novel approaches to determining the optimal structure of a growing multi-experts network (GMN). The first, called the direct method, deals with the expertise domains and expertise levels of the local experts. The growing neural gas (GNG) algorithm is used to cluster the local experts, and the concept of error distribution is used to apportion error among them. After the network reaches a specified size, a redundant-expert removal algorithm prunes it based on a ranking of the experts. However, the GMN is unwieldy because it has too many network control parameters. Therefore, a self-regulating GMN (SGMN) algorithm is proposed. The SGMN adopts self-adaptive learning rates for its gradient-descent learning rules and uses a modified form of a more rigorous clustering method, fully self-organized simplified adaptive resonance theory. Experimental results show that the SGMN achieves performance comparable to or better than the GMN on four benchmark problems, with reduced sensitivity to the learning-parameter settings. Moreover, both the GMN and the SGMN outperform the other neural networks and statistical models tested. The efficacy of the SGMN is further demonstrated in three industrial applications and a control problem, where it provides consistent results and shows considerable promise for building a novel type of nonlinear model composed of several local linear models.
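To make the flavour of such a model concrete, the following is a minimal Python sketch, not the authors' GMN/SGMN implementation: a fixed set of local linear experts with Gaussian receptive fields, responsibility-weighted error apportioning, and a delta-bar-delta-style self-adaptive learning rate standing in for the paper's self-adaptive rules. The class name LocalLinearExpert, the rate-adaptation constants, and the toy sin(3x) task are all illustrative assumptions; the growing/pruning mechanics and the GNG/FOSART clustering steps are omitted.

    # Minimal sketch (assumed, not the paper's method): local linear experts
    # with responsibility-weighted error apportioning and per-weight
    # self-adaptive learning rates (a delta-bar-delta-style heuristic).
    import numpy as np

    rng = np.random.default_rng(0)

    class LocalLinearExpert:
        """One local expert: a linear model valid near its centre."""
        def __init__(self, centre, dim, width=0.5):
            self.c = centre                        # expertise-domain centre
            self.width = width                     # receptive-field width
            self.w = rng.normal(0.0, 0.1, dim + 1) # linear weights + bias
            self.lr = np.full(dim + 1, 0.05)       # per-weight adaptive rates
            self.prev_g = np.zeros(dim + 1)        # previous gradient (for sign test)

        def activation(self, x):
            # Gaussian receptive field over the expert's domain
            return np.exp(-np.sum((x - self.c) ** 2) / (2.0 * self.width ** 2))

        def predict(self, x):
            return self.w @ np.append(x, 1.0)

    def network_output(experts, x):
        acts = np.array([e.activation(x) for e in experts])
        resp = acts / (acts.sum() + 1e-12)          # normalised responsibilities
        outs = np.array([e.predict(x) for e in experts])
        return resp @ outs, resp

    def train_step(experts, x, y):
        y_hat, resp = network_output(experts, x)
        err = y_hat - y
        for e, r in zip(experts, resp):
            # Error is apportioned by responsibility: experts that claimed
            # the input most strongly receive most of the blame.
            g = r * err * np.append(x, 1.0)
            # Self-adaptive rates (assumed heuristic): grow when the gradient
            # sign persists, shrink on sign flips, clipped for stability.
            same_sign = np.sign(g) == np.sign(e.prev_g)
            e.lr = np.clip(np.where(same_sign, e.lr * 1.05, e.lr * 0.5),
                           1e-4, 0.5)
            e.w -= e.lr * g
            e.prev_g = g

    # Toy usage: fit y = sin(3x) with six experts spread over [0, 1].
    experts = [LocalLinearExpert(np.array([c]), dim=1, width=0.25)
               for c in np.linspace(0.0, 1.0, 6)]
    X = rng.uniform(0.0, 1.0, (2000, 1))
    Y = np.sin(3.0 * X[:, 0])
    for x, y in zip(X, Y):
        train_step(experts, x, y)
    mse = np.mean([(network_output(experts, x)[0] - y) ** 2
                   for x, y in zip(X[:200], Y[:200])])
    print(f"sample MSE after one pass: {mse:.4f}")

The responsibility-weighted update is the key design choice: each expert's contribution to the global error determines how strongly its local linear model is corrected, which is the general idea behind apportioning error among local experts.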

* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.