Displaying all 4 publications

  1. Goh WY, Lim CP, Peh KK
    IEEE Trans Neural Netw, 2003;14(2):459-63.
    PMID: 18238031 DOI: 10.1109/TNN.2003.809420
    The applicability of an ensemble of Elman networks with boosting to drug dissolution profile prediction is investigated. Modifications of AdaBoost that enable its use in regression tasks are explained. Two real data sets comprising in vitro dissolution profiles of matrix-controlled-release theophylline pellets are employed to assess the effectiveness of the proposed system. Statistical evaluation and comparison of the results are performed. This work demonstrates the potential of the proposed system for predicting desired drug dissolution characteristics in pharmaceutical product formulation tasks.
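The abstract above mentions modifications of AdaBoost for regression without naming the exact variant. A minimal sketch of one well-known adaptation, the AdaBoost.R2-style weight update, illustrates the core idea: per-sample losses are normalised by the largest error, and sample weights grow on poorly predicted points. All names here are illustrative; the paper's exact variant may differ.

```python
def adaboost_r2_update(weights, y_true, y_pred):
    """One boosting round's weight update (AdaBoost.R2, linear loss).

    Illustrative sketch only; not the paper's exact modification.
    """
    errors = [abs(t - p) for t, p in zip(y_true, y_pred)]
    e_max = max(errors)
    if e_max == 0:                       # perfect fit: nothing to reweight
        return list(weights), 0.0
    losses = [e / e_max for e in errors]             # linear loss in [0, 1]
    avg_loss = sum(w * l for w, l in zip(weights, losses))
    beta = avg_loss / (1.0 - avg_loss)               # round's confidence
    # Well-predicted samples (small loss) are down-weighted by beta
    new_w = [w * beta ** (1.0 - l) for w, l in zip(weights, losses)]
    total = sum(new_w)
    return [w / total for w in new_w], beta
```

After each round, the ensemble keeps beta as the base learner's weight; boosting normally stops once the weighted average loss reaches 0.5.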
  2. Palaniappan R, Raveendran P, Omatu S
    IEEE Trans Neural Netw, 2002;13(2):486-91.
    PMID: 18244450 DOI: 10.1109/72.991435
    In this letter, neural networks (NNs) classify alcoholics and nonalcoholics using features extracted from visual evoked potential (VEP). A genetic algorithm (GA) is used to select the minimum number of channels that maximize classification performance. GA population fitness is evaluated using a fuzzy ARTMAP (FA) NN instead of the widely used multilayer perceptron (MLP). The MLP, despite its effective classification, requires long training time (on the order of 10^3 times that of FA). This makes it unsuitable for use with a GA, especially for on-line training. It is shown empirically that the optimal channel configuration selected by the proposed method is unbiased, i.e., it is optimal not only for FA but also for MLP classification. Therefore, it is proposed that for future experiments, these optimal channels could be considered for applications that involve classification of alcoholics.
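The channel-selection scheme described above can be sketched as a GA over bit masks, one bit per channel. In the paper, an individual's fitness comes from fuzzy ARTMAP classification accuracy; the synthetic fitness below merely stands in for that (the "informative" channel set and the sparsity penalty are invented for illustration).

```python
import random

INFORMATIVE = {3, 7, 12}   # hypothetical ground-truth channels (illustration)
N_CHANNELS = 16

def fitness(mask):
    # Stand-in for classifier accuracy: reward covering informative
    # channels, penalise selecting many channels.
    hits = sum(1 for c in INFORMATIVE if mask[c])
    return hits - 0.05 * sum(mask)

def evolve(pop, generations=50, rng=random.Random(0)):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # elitist truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(N_CHANNELS)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(N_CHANNELS)       # point mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

rng0 = random.Random(42)
pop = [[rng0.randint(0, 1) for _ in range(N_CHANNELS)] for _ in range(20)]
best = evolve(pop)
```

Because the top half of each generation survives unchanged, the best fitness never decreases; replacing `fitness` with a fast classifier's accuracy (FA rather than MLP, per the abstract) is what keeps each generation's evaluation cheap.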
  3. Loo CK, Rajeswari M, Rao MV
    IEEE Trans Neural Netw, 2004 Nov;15(6):1378-95.
    PMID: 15565767
    This paper presents two novel approaches to determine the optimum growing multi-experts network (GMN) structure. The first, called the direct method, deals with expertise domains and levels in connection with local experts. The growing neural gas (GNG) algorithm is used to cluster the local experts. The concept of error distribution is used to apportion error among the local experts. After the network reaches a specified size, a redundant-expert removal algorithm is invoked to prune the network based on the ranking of the experts. However, GMN is cumbersome to tune because it has too many network control parameters. Therefore, a self-regulating GMN (SGMN) algorithm is proposed. SGMN adopts self-adaptive learning rates for gradient-descent learning rules. In addition, SGMN adopts a more rigorous clustering method, fully self-organized simplified adaptive resonance theory, in a modified form. Experimental results show SGMN obtains comparable or better performance than GMN in four benchmark examples, with reduced sensitivity to learning-parameter settings. Moreover, both GMN and SGMN outperform the other neural networks and statistical models. The efficacy of SGMN is further demonstrated in three industrial applications and a control problem. It provides consistent results and shows considerable promise for building a novel type of nonlinear model consisting of several local linear models.
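The grow-then-prune idea behind GMN (accumulate error at local units, insert new units where error concentrates) can be sketched in one dimension. This is a heavily simplified illustration of the growing step, not the paper's GNG-based algorithm: unit structure, learning rate, and insertion offset below are all invented for the sketch.

```python
def nearest(units, x):
    """Index of the unit whose position is closest to sample x."""
    return min(range(len(units)), key=lambda i: abs(units[i]["pos"] - x))

def grow_step(units, samples, max_units):
    """One epoch: accumulate error at winners, then insert a unit
    near the unit with the largest accumulated error (if room)."""
    for x in samples:
        w = nearest(units, x)
        units[w]["err"] += abs(units[w]["pos"] - x)   # apportion error
        units[w]["pos"] += 0.1 * (x - units[w]["pos"])  # move winner toward x
    if len(units) < max_units:
        worst = max(units, key=lambda u: u["err"])
        units.append({"pos": worst["pos"] + 0.5, "err": 0.0})
        worst["err"] *= 0.5          # split responsibility with the new unit
    return units
```

The pruning half described in the abstract would do the reverse: rank units (e.g., by usage or residual error) and remove the lowest-ranked once the network hits its size limit.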
  4. Hasan SR, Siong NK
    IEEE Trans Neural Netw, 1997;8(2):424-36.
    PMID: 18255644
    In this paper, emerging parallel/distributed architectures are explored for the digital VLSI implementation of the adaptive bidirectional associative memory (BAM) neural network. A single instruction stream, many data stream (SIMD)-based parallel processing architecture is developed for the adaptive BAM neural network, taking advantage of the inherent parallelism in BAM. This novel neural processor architecture is named the sliding feeder BAM array processor (SLiFBAM). The SLiFBAM processor can be viewed as a two-stroke neural processing engine. It has four operating modes: learn pattern, evaluate pattern, read weight, and write weight. The design of a SLiFBAM VLSI processor chip is also described. Using 2-μm scalable CMOS technology, a SLiFBAM processor chip with 4+4 neurons and eight modules of 256×5-bit local weight-storage SRAM was integrated on a 6.9×7.4 mm² prototype die. The system architecture is highly flexible and modular, enabling the construction of larger BAM networks of up to 252 neurons using multiple SLiFBAM chips.
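For reference, the memory model that SLiFBAM implements in hardware is the classic BAM: weights are sums of outer products of bipolar pattern pairs, and recall bounces activations between the two layers until they stabilise. A minimal software sketch (the fixed iteration count and sign convention below are simplifications):

```python
def sign(v):
    # Bipolar threshold; ties broken toward -1 for simplicity.
    return [1 if s > 0 else -1 for s in v]

def train(pairs, n, m):
    """Hebbian weights: W = sum over pairs of the outer product x yᵀ."""
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += x[i] * y[j]
    return W

def recall(W, x):
    """Bidirectional recall: alternate y = sign(Wᵀx), x = sign(Wy)."""
    n, m = len(W), len(W[0])
    for _ in range(10):   # a few bounce iterations; BAM converges quickly
        y = sign([sum(W[i][j] * x[i] for i in range(n)) for j in range(m)])
        x = sign([sum(W[i][j] * y[j] for j in range(m)) for i in range(n)])
    return x, y
```

The two per-iteration matrix-vector products in `recall` are exactly the operations the abstract's "two-stroke" SIMD engine parallelises, one stroke per layer.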