Displaying all 5 publications

  1. Wi NT, Loo CK, Chockalingam L
    Int J Neural Syst, 2012 Dec;22(6):1250029.
    PMID: 23186278 DOI: 10.1142/S0129065712500293
    A small change in an image can cause a dramatic change in the resulting signals. The visual system must be able to ignore such changes, yet remain specific enough to perform recognition. This work aims to provide biologically grounded insights into 2D translation and scaling invariance and 3D pose invariance, without imposing strain on memory and with biological justification. The model can be divided into lower and higher visual stages. The lower visual stage models the visual pathway from the retina to the striate cortex (V1), whereas the modeling of the higher visual stage is based mainly on current psychophysical evidence.
  2. Haidar AM, Mohamed A, Al-Dabbagh M, Hussain A, Masoum M
    Int J Neural Syst, 2009 Dec;19(6):473-9.
    PMID: 20039470
    Load shedding is an essential requirement for maintaining the security of modern power systems, particularly in competitive energy markets. This paper proposes an intelligent scheme for fast and accurate load shedding that uses neural networks to predict a possible loss of load at an early stage and a neuro-fuzzy system to determine the amount of load to shed in order to avoid a cascading outage. A large-scale electrical power system was used to validate the performance of the proposed technique in determining the amount of load to shed. The proposed techniques can provide tools for improving the reliability and continuity of the power supply, as confirmed by the results obtained in this research, of which sample results are given in this paper.
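    The two-stage scheme the abstract describes (a neural predictor followed by fuzzy reasoning over the shed amount) can be sketched as follows. This is a minimal illustration, not the paper's method: the network weights are placeholders and the triangular membership breakpoints are invented for the example.

    ```python
    import numpy as np

    def predict_loss_of_load(features, W1, b1, W2, b2):
        """Toy feedforward network standing in for the paper's
        loss-of-load predictor (the weights here are hypothetical)."""
        h = np.tanh(features @ W1 + b1)      # hidden layer
        return float(np.tanh(h @ W2 + b2))   # predicted loss of load (p.u.)

    def fuzzy_shed_amount(loss, total_load):
        """Triangular fuzzy rules mapping a predicted loss to a load to
        shed. Breakpoints and consequents are illustrative only."""
        low  = max(0.0, min(1.0, (0.3 - loss) / 0.3))
        mid  = max(0.0, 1.0 - abs(loss - 0.4) / 0.2)
        high = max(0.0, min(1.0, (loss - 0.5) / 0.3))
        w = low + mid + high
        # weighted-average (centroid-like) defuzzification
        frac = (low * 0.05 + mid * 0.25 + high * 0.6) / w if w > 0 else 0.0
        return frac * total_load
    ```

    A small predicted loss maps to a small shed fraction and a large one to an aggressive shed, which is the qualitative behavior such a scheme needs to avert cascading outages.
    
    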
  3. Gunasekaran S, Venkatesh B, Sagar BS
    Int J Neural Syst, 2004 Apr;14(2):139-45.
    PMID: 15112371
    The training methodology of the Back Propagation Network (BPN) is well documented. One aspect of the BPN that requires investigation is whether or not it will train successfully for a given training data set and architecture. In this paper, the behavior of the BPN is analyzed during its training phase for both convergent and divergent training data sets. The evolution of the weights was monitored during training, plotted as a return map, and characterized by its fractal dimension. This fractal-dimensional analysis of the weight-evolution trajectories provides new insight into the behavior of the BPN and the dynamics of weight evolution.
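    The analysis pipeline the abstract outlines (turn a weight trajectory into a return map, then estimate its fractal dimension) can be sketched with a standard box-counting estimator. This is a generic implementation of the technique, not the paper's code; the grid sizes are illustrative choices.

    ```python
    import numpy as np

    def return_map(series):
        """Pairs (w_t, w_{t+1}) of a single weight's trajectory."""
        s = np.asarray(series, dtype=float)
        return np.column_stack([s[:-1], s[1:]])

    def box_counting_dimension(points, sizes=(2, 4, 8, 16, 32)):
        """Estimate the fractal dimension of a 2D point set by box
        counting: count occupied grid cells at several resolutions and
        fit the slope of log N(n) against log n."""
        pts = np.asarray(points, dtype=float)
        mn, mx = pts.min(axis=0), pts.max(axis=0)
        span = np.where(mx - mn > 0, mx - mn, 1.0)
        unit = (pts - mn) / span                      # normalize to unit square
        counts = []
        for n in sizes:
            cells = np.floor(np.clip(unit, 0, 1 - 1e-12) * n).astype(int)
            counts.append(len({tuple(c) for c in cells}))
        return np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    ```

    A smoothly converging weight trajectory traces a near-1D curve in the return map (dimension close to 1), whereas erratic, non-converging weights scatter the map and push the estimate higher.
    
    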
  4. Dawood F, Loo CK
    Int J Neural Syst, 2018 May;28(4):1750038.
    PMID: 29022403 DOI: 10.1142/S0129065717500381
    Imitation learning through self-exploration is essential in developing sensorimotor skills. Most developmental theories emphasize that social interactions, especially the understanding of observed actions, could first be achieved through imitation, yet discussion of the origin of primitive imitative abilities is often neglected in favor of the possibility of their innateness. This paper presents a developmental model of imitation learning based on the hypothesis that a humanoid robot acquires imitative abilities through sensorimotor associative learning induced by self-exploration. In designing such a learning system, several key issues are addressed: automatic segmentation of the observed actions into motion primitives using raw camera images, without requiring any kinematic model; incremental learning of spatio-temporal motion sequences that dynamically generates a topological structure in a self-stabilizing manner; organization of the learned data for easy and efficient retrieval using a dynamic associative memory; and utilization of the segmented motion primitives to generate complex behavior by combining them. In our experiment, the self-posture is acquired by observing the image of the robot's own body posture while it performs actions in front of a mirror through body babbling. The complete architecture was evaluated through simulation and through real-robot experiments performed on the DARwIn-OP humanoid robot.
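    The first key issue above, segmenting observed actions into motion primitives from raw images without a kinematic model, can be sketched with a common heuristic: compute per-step motion energy from frame differences and cut the sequence at low-motion frames. The fixed threshold below is a simplification and not the paper's actual criterion.

    ```python
    import numpy as np

    def frame_differences(frames):
        """Per-step motion energy: mean absolute difference between
        consecutive raw image frames (no kinematic model needed)."""
        f = np.asarray(frames, dtype=float)
        return np.abs(np.diff(f, axis=0)).mean(axis=(1, 2))

    def segment_primitives(energy, threshold):
        """Split the sequence into candidate motion primitives at
        frames where the motion energy drops below the threshold."""
        segments, start = [], 0
        for t, e in enumerate(energy):
            if e < threshold and t > start:
                segments.append((start, t))
                start = t + 1
        if start < len(energy):
            segments.append((start, len(energy)))
        return segments
    ```

    Each `(start, end)` span is a candidate primitive that a downstream incremental learner (e.g. a topology-building network) could then encode and store in an associative memory.
    
    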
  5. Masuyama N, Loo CK, Wermter S
    Int J Neural Syst, 2019 Jun;29(5):1850052.
    PMID: 30764724 DOI: 10.1142/S0129065718500521
    This paper attempts to solve the typical problems of self-organizing growing network models, namely (a) the influence of the order of the input data on the self-organizing ability, (b) instability on high-dimensional data and excessive sensitivity to noise, and (c) an expensive computational cost, by integrating the Kernel Bayes Rule (KBR) and the Correntropy-Induced Metric (CIM) into the Adaptive Resonance Theory (ART) framework. KBR performs a covariance-free Bayesian computation, which keeps the computation fast and stable. CIM is a generalized similarity measure that maintains a high noise-reduction ability even in a high-dimensional space. In addition, a Growing Neural Gas (GNG)-based topology construction process is integrated into the ART framework to enhance its self-organizing ability. Simulation experiments with synthetic and real-world datasets show that the proposed model has an outstanding, stable self-organizing ability across various test environments.
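    The Correntropy-Induced Metric mentioned above has a simple closed form from the correntropy literature: with a Gaussian kernel, CIM(x, y) = sqrt(1 - mean_i κ_σ(x_i - y_i)). A minimal sketch (the bandwidth σ = 1.0 is an arbitrary choice here; in practice it is tuned):

    ```python
    import numpy as np

    def cim(x, y, sigma=1.0):
        """Correntropy-Induced Metric between two vectors using a
        Gaussian kernel of bandwidth sigma."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        k = np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))  # per-dimension kernel
        return np.sqrt(1.0 - k.mean())
    ```

    Unlike the Euclidean distance, CIM is bounded: a single wildly corrupted coordinate can raise the kernel mean by at most 1/n, which is the noise-insensitivity property the abstract refers to.
    
    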