  1. Che Hasan R, Ierodiaconou D, Laurenson L, Schimel A
    PLoS One, 2014;9(5):e97339.
    PMID: 24824155 DOI: 10.1371/journal.pone.0097339
    Multibeam echosounders (MBES) are increasingly becoming the tool of choice for marine habitat mapping applications. In turn, the rapid expansion of habitat mapping studies has resulted in a need for automated classification techniques to efficiently map benthic habitats, assess confidence in model outputs, and evaluate the importance of variables driving the patterns observed. The benthic habitat characterisation process often involves the analysis of MBES bathymetry, backscatter mosaic or angular response with observation data providing ground truth. However, studies that make use of the full range of MBES outputs within a single classification process are limited. We present an approach that integrates backscatter angular response with MBES bathymetry, backscatter mosaic and their derivatives in a classification process using a Random Forests (RF) machine-learning algorithm to predict the distribution of benthic biological habitats. This approach includes a method of deriving statistical features from backscatter angular response curves created from MBES data collated within homogeneous regions of a backscatter mosaic. Using the RF algorithm we assess the relative importance of each variable in order to optimise the classification process and simplify models applied. The results showed that the inclusion of the angular response features in the classification process improved the accuracy of the final habitat maps from 88.5% to 93.6%. The RF algorithm identified bathymetry and the angular response mean as the two most important predictors. However, the highest classification rates were only obtained after incorporating additional features derived from bathymetry and the backscatter mosaic. The angular response features were found to be more important to the classification process compared to the backscatter mosaic features. This analysis indicates that integrating angular response information with bathymetry and the backscatter mosaic, along with their derivatives, constitutes an important improvement for studying the distribution of benthic habitats, which is necessary for effective marine spatial planning and resource management.
    Matched MeSH terms: Algorithms*
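    A minimal sketch of the Random Forests workflow described above, using scikit-learn with hypothetical feature names and synthetic stand-in data (not the authors' code): train on MBES-derived predictors, estimate accuracy, and rank variable importance.

```python
# Sketch only: hypothetical predictors standing in for MBES-derived layers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = ["bathymetry", "ang_resp_mean", "backscatter_mosaic", "slope"]
X = rng.normal(size=(500, len(features)))   # stand-in for survey grid data
y = rng.integers(0, 3, size=500)            # stand-in benthic habitat classes

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

rf.fit(X, y)
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")             # RF variable importance ranking
```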
  2. Alkhasawneh MSh, Ngah UK, Tay LT, Mat Isa NA, Al-batah MS
    ScientificWorldJournal, 2013;2013:415023.
    PMID: 24453846 DOI: 10.1155/2013/415023
    Landslides are among the natural disasters that occur in Malaysia. Topographic factors such as elevation, slope angle, slope aspect, general curvature, plan curvature, and profile curvature are considered the main causes of landslides. To determine the dominant topographic factors in landslide mapping analysis, a study was conducted and is presented in this paper. There are three main stages involved in this study. The first stage is the extraction of additional topographic factors. Previous landslide studies had identified mainly six topographic factors; seven new factors are proposed in this study: longitude curvature, tangential curvature, cross-section curvature, surface area, diagonal line length, surface roughness, and rugosity. The second stage is the specification of the weight of each factor using two methods: multilayer perceptron (MLP) network classification accuracy and Zhou's algorithm. In the third stage, the factors with higher weights were used to improve the MLP performance. Out of the thirteen factors, eight were considered important: surface area, longitude curvature, diagonal line length, slope angle, elevation, slope aspect, rugosity, and profile curvature. The classification accuracy of the multilayer perceptron neural network increased by 3% after the elimination of the five less important factors.
    Matched MeSH terms: Algorithms*
  3. Lim KS, Ibrahim Z, Buyamin S, Ahmad A, Naim F, Ghazali KH, et al.
    ScientificWorldJournal, 2013;2013:510763.
    PMID: 23737718 DOI: 10.1155/2013/510763
    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution is fitter than the current best solution on the objective function optimised by that swarm, which yields poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced that uses nondominated solutions as the guidance for a swarm rather than the best solution from another swarm. In this paper, the performance of the improved algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved algorithm performs markedly better than the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
    Matched MeSH terms: Algorithms*
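    The core bookkeeping behind nondominated guidance can be illustrated with a small Pareto-archive sketch (minimisation of all objectives assumed; not the paper's implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only mutually nondominated objective vectors."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive                        # candidate is dominated: discard
    archive = [kept for kept in archive if not dominates(candidate, kept)]
    archive.append(candidate)
    return archive

archive = []
for point in [(1.0, 4.0), (2.0, 3.0), (2.5, 3.5), (0.5, 5.0)]:
    archive = update_archive(archive, point)
print(archive)   # (2.5, 3.5) is dominated by (2.0, 3.0) and was discarded
```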
  4. Hamadneh N, Khan WA, Sathasivam S, Ong HC
    PLoS One, 2013;8(5):e66080.
    PMID: 23741525 DOI: 10.1371/journal.pone.0066080
    Particle swarm optimization (PSO) is employed to investigate the overall performance of a pin fin. The following study examines the effect of governing parameters on overall thermal/fluid performance associated with different fin geometries, including rectangular plate fins as well as square, circular, and elliptical pin fins. The idea of entropy generation minimization (EGM) is employed to combine the effects of thermal resistance and pressure drop within the heat sink. A general dimensionless expression for the entropy generation rate is obtained by considering a control volume around the pin fin, including the base plate, and applying the conservation equations for mass and energy with the entropy balance. Selected fin geometries are examined for heat transfer, fluid friction, and the minimum entropy generation rate corresponding to different parameters, including axis ratio, aspect ratio, and Reynolds number. The results clearly indicate that the preferred fin profile is strongly dependent on these parameters.
    Matched MeSH terms: Algorithms*
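    A generic global-best PSO loop is sketched below with a stand-in quadratic objective; in the study above, the objective would be the dimensionless entropy generation rate of the pin-fin model.

```python
import numpy as np

def objective(x):               # placeholder for the entropy generation rate
    return np.sum((x - 1.5) ** 2, axis=1)

rng = np.random.default_rng(1)
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 3.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), objective(pos)

for _ in range(100):
    g = pbest[np.argmin(pbest_f)]                        # global best
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = objective(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]

print("minimum found near:", pbest[np.argmin(pbest_f)])  # ~ (1.5, 1.5)
```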
  5. Naher H, Abdullah FA, Akbar MA
    PLoS One, 2013;8(5):e64618.
    PMID: 23741355 DOI: 10.1371/journal.pone.0064618
    The generalized and improved (G'/G)-expansion method is a powerful and advantageous mathematical tool for establishing abundant new traveling wave solutions of nonlinear partial differential equations. In this article, we investigate a higher-dimensional nonlinear evolution equation, namely the (3+1)-dimensional modified KdV-Zakharov-Kuznetsev equation, via this method. The solutions are found in hyperbolic, trigonometric, and rational function form involving additional parameters; some of the constructed solutions coincide with results obtained by other authors when certain parameters take special values, while others are new. The numerical results shown in the figures were obtained with the aid of the commercial software Maple.
    Matched MeSH terms: Algorithms*
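    For reference, the standard ansatz underlying the (G'/G)-expansion family of methods (a sketch of the common form, not the paper's exact generalized variant) is:

```latex
% Standard (G'/G)-expansion ansatz: a traveling-wave solution u(\xi),
% \xi = x + y + z - Vt, is sought as a finite series in G'/G,
\begin{equation}
  u(\xi) = \sum_{i=0}^{N} a_i \left( \frac{G'}{G} \right)^{i}, \qquad a_N \neq 0,
\end{equation}
% where G(\xi) satisfies the auxiliary linear ODE
\begin{equation}
  G''(\xi) + \lambda G'(\xi) + \mu G(\xi) = 0.
\end{equation}
% The sign of \lambda^2 - 4\mu yields hyperbolic (> 0), trigonometric (< 0),
% or rational (= 0) solution families, and N is fixed by balancing the
% highest-order derivative against the nonlinear term.
```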
  6. Abed SA, Tiun S, Omar N
    PLoS One, 2015;10(9):e0136614.
    PMID: 26422368 DOI: 10.1371/journal.pone.0136614
    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence classified as ambiguous has multiple interpretations, only one of which is correct. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using a Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, whereas the HSA is used to maximize the overall semantic similarity of the set of parsed words. The HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to compute its fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we applied the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method produces effective solutions for most instances of the datasets used.
    Matched MeSH terms: Algorithms*
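    The kind of fitness function such a harmony search maximises can be sketched as the total pairwise similarity of one candidate sense assignment; sim() below is a toy placeholder for the jcn/adapted-Lesk measures used in the paper.

```python
from itertools import combinations

def fitness(candidate_senses, sim):
    """Overall semantic coherence of one 'harmony' (sense assignment)."""
    return sum(sim(a, b) for a, b in combinations(candidate_senses, 2))

# Toy similarity table for illustration only; the paper uses jcn + Lesk.
table = {frozenset({"bank#river", "water#1"}): 0.9,
         frozenset({"bank#finance", "water#1"}): 0.1}
sim = lambda a, b: table.get(frozenset({a, b}), 0.0)

print(fitness(["bank#river", "water#1"], sim))    # 0.9 -> preferred harmony
print(fitness(["bank#finance", "water#1"], sim))  # 0.1
```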
  7. Idbeaa T, Abdul Samad S, Husain H
    PLoS One, 2016;11(3):e0150732.
    PMID: 26963093 DOI: 10.1371/journal.pone.0150732
    This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most current video steganographic techniques, which consider only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of the luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, EBBD addresses two security concepts: data encryption and data concealment. During the embedding process, the secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo-random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD, with a better trade-off between imperceptibility and payload compared with previous techniques, while ensuring minimal bitrate increase and negligible degradation of PSNR values.
    Matched MeSH terms: Algorithms*
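    The key-driven candidate selection can be sketched as follows; the counts and index conventions are illustrative assumptions, not the paper's exact specification.

```python
import random

def candidate_positions(secret_key, n_blocks, per_block=4):
    """Pick candidate AC-coefficient indices inside each 8x8 sub-block."""
    prng = random.Random(secret_key)      # sender and receiver share the key
    # indices 1..63 address AC coefficients (index 0 is the DC coefficient)
    return [sorted(prng.sample(range(1, 64), per_block))
            for _ in range(n_blocks)]

print(candidate_positions("shared-key", n_blocks=2))
# The receiver re-derives the same positions from the same key, so the
# embedded bits can be located and extracted without side information.
```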
  8. Chin JH, Ratnavelu K
    PLoS One, 2016;11(5):e0155320.
    PMID: 27176470 DOI: 10.1371/journal.pone.0155320
    Community structure is considered one of the most interesting features in complex networks. Many real-world complex systems exhibit community structure, where individuals with similar properties form a community. The identification of communities in a network is important for understanding its structure. Thus, community detection in complex networks has gained immense interest over the last decade. Many community detection methods have been proposed, one of which is the label propagation algorithm (LPA). Its simplicity and time efficiency make the LPA a popular community detection method. However, the LPA suffers from unstable detection results due to the randomness induced in the algorithm. The focus of this paper is to improve the stability and accuracy of the LPA while retaining its simplicity. Our proposed algorithm first detects the main communities in a network by using the number of mutual neighbouring nodes. Subsequently, nodes are added to communities by using a constrained LPA; the constraints are then gradually relaxed until all nodes are assigned to groups. To refine the quality of the detected communities, nodes can be switched to another community or removed from their current community at various stages of the algorithm. We evaluated our algorithm on three types of benchmark networks, namely the Lancichinetti-Fortunato-Radicchi (LFR), Relaxed Caveman (RC) and Girvan-Newman (GN) benchmarks, and also applied it to several real-world networks of various sizes. The results show that the proposed algorithm detects communities accurately, and that our constrained LPA is significantly more robust and stable than the simple LPA, as it yields deterministic results.
    Matched MeSH terms: Algorithms*
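    For contrast, the core loop of the plain (unconstrained) label propagation algorithm that this work builds on is sketched below; the random visiting order is precisely the source of instability the paper addresses.

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # every node starts in its own group
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)                # random order: the instability source
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            pick = rng.choice([l for l, c in counts.items() if c == top])
            if labels[v] != pick:
                labels[v], changed = pick, True
        if not changed:
            break
    return labels

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
print(label_propagation(adj))             # two communities: {0,1,2} and {3,4}
```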
  9. Albatsh FM, Ahmad S, Mekhilef S, Mokhlis H, Hassan MA
    PLoS One, 2015;10(4):e0123802.
    PMID: 25874560 DOI: 10.1371/journal.pone.0123802
    This study examines a new approach to selecting the locations of unified power flow controllers (UPFCs) in power system networks based on a dynamic analysis of voltage stability. Power system voltage stability indices (VSIs), including the line stability index (LQP), the voltage collapse proximity indicator (VCPI), and the line stability index (Lmn), are employed to identify the most suitable locations in the system for UPFCs. In this study, the locations of the UPFCs are identified by dynamically varying the loads across all of the load buses to represent actual power system conditions. Simulations were conducted in the Power System Computer-Aided Design (PSCAD) software using the IEEE 14-bus and 39-bus benchmark power system models. The simulation results demonstrate the effectiveness of the proposed method: when the UPFCs are placed in the locations obtained with the new approach, the voltage stability improves. A comparison of the steady-state VSIs resulting from UPFC placement by the new approach and by particle swarm optimization (PSO) and differential evolution (DE), which are static methods, is presented. In all cases, the UPFC locations given by the proposed approach result in better voltage stability than those obtained with the other approaches.
    Matched MeSH terms: Algorithms*
  10. Daud KM, Mohamad MS, Zakaria Z, Hassan R, Shah ZA, Deris S, et al.
    Comput Biol Med, 2019 Oct;113:103390.
    PMID: 31450056 DOI: 10.1016/j.compbiomed.2019.103390
    Metabolic engineering is the improvement of the cellular activities of an organism by manipulating its metabolic, signalling, or regulatory networks. In silico reaction knockout simulation is one of the techniques applied to analyse the effects of genetic perturbations on metabolite production. Many methods consider growth coupling as the objective function, searching for mutants that maximise both the growth rate and the production rate, although the final goal is to increase the production rate. Furthermore, these methods produce a single solution, whereas in reality cells do not pursue a single objective and must balance several competing objectives. In this work, a method termed ndsDSAFBA (non-dominated sorting Differential Search Algorithm and Flux Balance Analysis) has been developed to find the reaction knockouts that maximise both the production rate and the growth rate of the mutant, by incorporating Pareto dominance concepts. The proposed ndsDSAFBA method was validated using three genome-scale metabolic models. We obtained a set of non-dominated solutions, each representing a different mutant strain. The results were compared with single-objective optimisation (SOO) and multi-objective optimisation (MOO) methods, and demonstrate that ndsDSAFBA is superior in terms of production rate and growth rate.
    Matched MeSH terms: Algorithms*
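    The kind of single-knockout evaluation underlying such methods can be sketched with the COBRApy toolbox (model path and reaction IDs are hypothetical placeholders; this is not the authors' ndsDSAFBA code):

```python
import cobra  # COBRApy; the model path and IDs below are hypothetical

model = cobra.io.read_sbml_model("genome_scale_model.xml")

def evaluate_knockouts(model, reaction_ids, product_exchange_id):
    """Return (growth rate, product flux) for one candidate knockout set."""
    with model:                                   # changes revert on exit
        for rid in reaction_ids:
            model.reactions.get_by_id(rid).knock_out()
        sol = model.optimize()                    # flux balance analysis
        return sol.objective_value, sol.fluxes[product_exchange_id]

growth, production = evaluate_knockouts(model, ["RXN_A", "RXN_B"], "EX_target_e")
# ndsDSAFBA compares such (growth, production) pairs by Pareto dominance
# instead of collapsing them into a single objective.
```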
  11. Al-Saiagh W, Tiun S, Al-Saffar A, Awang S, Al-Khaleefa AS
    PLoS One, 2018;13(12):e0208695.
    PMID: 30571777 DOI: 10.1371/journal.pone.0208695
    Word sense disambiguation (WSD) is the process of identifying an appropriate sense for an ambiguous word. Given the complexity of human languages, in which a single word can yield different meanings, WSD has been utilized in several domains of interest such as search engines and machine translation. The literature shows a vast number of techniques used for WSD. Recently, researchers have focused on meta-heuristic approaches to identify the solutions that reflect the best sense. However, the application of meta-heuristic approaches remains limited and requires efficient exploration and exploitation of the problem space. Hence, the current study proposes a hybrid meta-heuristic method that combines particle swarm optimization (PSO) with simulated annealing to find the globally best meaning of a given text. Different semantic measures are utilized in this model as objective functions for the proposed hybrid PSO; these consist of the JCN and extended Lesk methods, which are combined effectively in this work. The proposed method is tested on three benchmark datasets (SemCor 3.0, SensEval-2, and SensEval-3). Results show that the proposed method outperforms state-of-the-art approaches.
    Matched MeSH terms: Algorithms*
  12. Gulzari UA, Khan S, Sajid M, Anjum S, Torres FS, Sarjoughian H, et al.
    PLoS One, 2019;14(10):e0222759.
    PMID: 31577809 DOI: 10.1371/journal.pone.0222759
    This paper presents the Hybrid Scalable-Minimized-Butterfly-Fat-Tree (H-SMBFT) topology for on-chip communication. The main aspects of this work are a description of the architectural design and characteristics, as well as a comparative analysis against two established indirect topologies, namely Butterfly-Fat-Tree (BFT) and Scalable-Minimized-Butterfly-Fat-Tree (SMBFT). Simulation results demonstrate that the proposed topology outperforms its predecessors in terms of performance, area, and power dissipation. Specifically, it improves the link interconnectivity between routing levels, such that the number of required links is reduced. This results in reduced router complexity and shortened routing paths between any pair of communicating nodes in the network. Moreover, simulation results under synthetic as well as real-world embedded application workloads reveal that H-SMBFT can reduce the average latency by up to 35.63% and 17.36% compared with BFT and SMBFT, respectively. In addition, the power dissipation of the network can be reduced by up to 33.82% and 19.45%, while energy consumption can be improved by up to 32.91% and 16.83% compared with BFT and SMBFT, respectively.
    Matched MeSH terms: Algorithms*
  13. Hoque MS, Jamil N, Amin N, Lam KY
    Sensors (Basel), 2021 Jun 20;21(12).
    PMID: 34202977 DOI: 10.3390/s21124220
    Successful cyber-attacks are caused by the exploitation of vulnerabilities in the software and/or hardware of systems deployed on premises or in the cloud. Although hundreds of vulnerabilities are discovered every year, only a small fraction of them are actually exploited, so there exists a severe class imbalance between the numbers of exploited and non-exploited vulnerabilities. The open-source National Vulnerability Database, the largest repository indexing and maintaining all known vulnerabilities, assigns a unique identifier to each vulnerability, and each registered vulnerability also receives a severity score based on the impact it might inflict if compromised. Recent research has shown that the CVSS score is not the only factor determining whether a vulnerability is selected for exploitation, and that other attributes in the National Vulnerability Database can be effectively utilized as predictive features to predict the most exploitable vulnerabilities. Since cybersecurity management is highly resource-intensive, organizations such as cloud systems will benefit when the vulnerabilities most likely to be exploited in their system software or hardware can be predicted as accurately and reliably as possible, so that the available resources can be directed at fixing those first. Existing research has developed vulnerability exploitation prediction models that address the class imbalance with algorithmic and artificial data-resampling techniques, but these models still suffer from overfitting to the majority class, rendering them practically unreliable. In this research, we have designed a novel cost function to address the class imbalance. We have also utilized the large text corpus available in the extracted dataset to develop a custom-trained word vector that better captures the context of the local text data, for use as an embedding layer in neural networks. Our vulnerability exploitation prediction models, powered by the novel cost function and custom-trained word vector, achieve high overall performance, with accuracy, precision, recall, F1-score and AUC values of 0.92, 0.89, 0.98, 0.94 and 0.97, respectively, thereby outperforming existing models while overcoming the overfitting problem caused by class imbalance.
    Matched MeSH terms: Algorithms*
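    The general idea of penalising rare-class errors through the loss, rather than resampling, can be sketched with Keras' stock class_weight mechanism; this is a stand-in illustration on synthetic data, not the paper's novel cost function.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (rng.random(1000) < 0.05).astype("float32")   # ~5% exploited: imbalanced

neg, pos = np.bincount(y.astype(int), minlength=2)
class_weight = {0: 1.0, 1: neg / pos}             # rare-class errors cost more

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=5, class_weight=class_weight, verbose=0)
print(model.evaluate(X, y, verbose=0))            # [loss, auc]
```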
  14. El-Badawy IM, Singh OP, Omar Z
    Technol Health Care, 2021;29(1):59-72.
    PMID: 32716337 DOI: 10.3233/THC-202198
    BACKGROUND: The quantitative features of a capnogram signal are important clinical metrics in assessing pulmonary function. However, these features should be quantified from the regular (artefact-free) segments of the capnogram waveform.

    OBJECTIVE: This paper presents a machine learning-based approach for the automatic classification of regular and irregular capnogram segments.

    METHODS: Herein, we proposed four time-domain and two frequency-domain features, which were evaluated with a support vector machine classifier through ten-fold cross-validation. A MATLAB simulation was conducted on 100 regular and 100 irregular 15 s capnogram segments. Analysis of variance was performed to investigate the significance of the proposed features, and Pearson's correlation was utilized to select the most substantial ones, namely the variance and the area under the normalized magnitude spectrum. Classification performance using these features was evaluated against two feature sets in which only time- or frequency-domain features were employed.

    RESULTS: Results showed a classification accuracy of 86.5%, which outperformed the other cases by an average of 5.5%. The achieved specificity, sensitivity, and precision were 84%, 89% and 86.51%, respectively. The average execution time for feature extraction and classification per segment is only 36 ms.

    CONCLUSION: The proposed approach can be integrated with capnography devices for real-time capnogram-based respiratory assessment. However, further research is recommended to enhance the classification performance.

    Matched MeSH terms: Algorithms*
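    The two selected features can be sketched in NumPy under an assumed sampling rate (the paper's exact preprocessing is not reproduced):

```python
import numpy as np

def capnogram_features(segment, fs=10.0):
    """Variance and area under the normalized magnitude spectrum."""
    variance = np.var(segment)                        # time-domain feature
    spectrum = np.abs(np.fft.rfft(segment))
    spectrum = spectrum / spectrum.max()              # normalized magnitude
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    # trapezoidal area under the normalized spectrum (frequency-domain feature)
    area = np.sum((spectrum[1:] + spectrum[:-1]) / 2.0 * np.diff(freqs))
    return variance, area

fs = 10.0                                             # assumed sampling rate
t = np.arange(0.0, 15.0, 1.0 / fs)                    # one 15 s segment
regular = np.clip(np.sin(2 * np.pi * 0.25 * t), 0.0, None)  # toy waveform
print(capnogram_features(regular, fs))
```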
  15. Adnan AI, Hanapi ZM, Othman M, Zukarnain ZA
    PLoS One, 2017;12(1):e0170273.
    PMID: 28121992 DOI: 10.1371/journal.pone.0170273
    Secure geographic routing protocols for Wireless Sensor Networks (WSNs) have attracted considerable attention because routing initiation requires no dependency and responding messages are restricted to an allocated sextant. However, the existing protocols are likely to drop packets when legitimate nodes fail to respond to the routing initiation messages while attackers in the allocated sextant manage to respond. Furthermore, these protocols are designed with an inefficient collection window and inadequate verification criteria, which may lead to a high number of attacker selections. To prevent the failure to find an appropriate relay node and undesirable packet retransmission, this paper presents the Secure Region-Based Geographic Routing Protocol (SRBGR), which increases the probability of selecting an appropriate relay node. By extending the allocated sextant and applying different message contention priorities, more legitimate nodes can be admitted to the routing process. The paper also proposes a bounded collection window that provides sufficient collection time, together with verification criteria for both attacker identification and isolation. Extensive simulation experiments were performed to evaluate the performance of the proposed protocol against existing protocols. The results demonstrate that SRBGR increases network performance in terms of packet delivery ratio and isolates attacks such as Sybil and black hole.
    Matched MeSH terms: Algorithms*
  16. Madni SHH, Abd Latiff MS, Abdullahi M, Abdulhamid SM, Usman MJ
    PLoS One, 2017;12(5):e0176321.
    PMID: 28467505 DOI: 10.1371/journal.pone.0176321
    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for heuristic methods. Several heuristic algorithms have been developed and used to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods were developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan, and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min, and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
    Matched MeSH terms: Algorithms*
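    As an illustration, the Min-min heuristic from the list above can be sketched in a few lines (toy expected-time-to-compute matrix; not the authors' implementation): among all unscheduled tasks, repeatedly pick the one whose earliest completion time is smallest and bind it to that machine.

```python
import numpy as np

def min_min(etc):
    """etc[i, j]: expected time to compute task i on machine j."""
    n_tasks, n_vms = etc.shape
    ready = np.zeros(n_vms)                    # when each machine is next free
    unscheduled, schedule = set(range(n_tasks)), {}
    while unscheduled:
        best_task, best_vm, best_ct = None, None, np.inf
        for t in unscheduled:                  # earliest completion per task
            ct = ready + etc[t]
            j = int(np.argmin(ct))
            if ct[j] < best_ct:
                best_task, best_vm, best_ct = t, j, ct[j]
        schedule[best_task] = best_vm          # bind the overall minimum
        ready[best_vm] = best_ct
        unscheduled.remove(best_task)
    return schedule, ready.max()               # assignment and makespan

etc = np.array([[4.0, 6.0], [3.0, 5.0], [7.0, 2.0]])
print(min_min(etc))                            # ({2: 1, 1: 0, 0: 0}, 7.0)
```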
  17. Nasser AB, Zamli KZ, Alsewari AA, Ahmed BS
    PLoS One, 2018;13(5):e0195187.
    PMID: 29718918 DOI: 10.1371/journal.pone.0195187
    The application of meta-heuristic algorithms for t-way testing (where t indicates the interaction strength) has recently become prevalent, and many useful meta-heuristic algorithms have been developed to implement t-way strategies. Mixed results have been reported in the literature, highlighting that no single strategy appears superior to the others. The hybridization of two or more algorithms can enhance the overall search capability, compensating for the limitations of one algorithm with the strengths of others. Thus, hybrid variants of the flower pollination algorithm (FPA) are proposed in the current work. Four hybrid variants of FPA are considered, combining FPA with other algorithmic components. The experimental results demonstrate that the FPA hybrids overcome the slow convergence of the original FPA and offer statistically superior performance compared with existing t-way strategies in terms of test suite size.
    Matched MeSH terms: Algorithms*
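    The global-pollination step of the standard FPA, whose slow convergence the hybrids target, can be sketched with Lévy-distributed steps via Mantegna's algorithm (a textbook form, not the paper's code):

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(2)

def levy(size, beta=1.5):
    """Levy-stable step sizes via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def global_pollination(flower, best, scale=0.01):
    """Move a candidate toward the global best with a Levy-sized step."""
    return flower + scale * levy(flower.shape) * (best - flower)

flower, best = np.array([0.3, 0.7]), np.array([0.0, 0.0])
print(global_pollination(flower, best))
```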
  18. Alhaj TA, Siraj MM, Zainal A, Elshoush HT, Elhaj F
    PLoS One, 2016;11(11):e0166017.
    PMID: 27893821 DOI: 10.1371/journal.pone.0166017
    Grouping and clustering alerts for intrusion detection based on the similarity of features is referred to as structural-based alert correlation and can discover a list of attack steps. Previous researchers selected features and data sources manually, based on their knowledge and experience, which leads to less accurate identification of attack steps and inconsistent clustering accuracy. Furthermore, existing alert correlation systems deal with a huge amount of data containing null values, incomplete information, and irrelevant features, making the analysis of the alerts tedious, time-consuming, and error-prone. Therefore, this paper focuses on selecting accurate and significant alert features that are appropriate for representing the attack steps, thus enhancing the structural-based alert correlation model. A two-tier feature selection method is proposed to obtain the significant features. The first tier ranks the subset of features by information gain entropy in decreasing order. The second tier extends this subset with additional features that have better discriminative ability than the initially ranked features. Performance analysis shows the significance of the selected features in terms of clustering accuracy on the DARPA 2000 intrusion detection scenario-specific dataset.
    Matched MeSH terms: Algorithms*
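    The first tier can be sketched with scikit-learn's mutual information estimator as a close stand-in for information gain (synthetic data for illustration only):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 400)                  # toy attack-step labels
informative = y + rng.normal(0.0, 0.3, 400)  # feature correlated with labels
noise = rng.normal(size=400)                 # irrelevant feature
X = np.column_stack([informative, noise])

scores = mutual_info_classif(X, y, random_state=0)
ranked = sorted(zip(["informative", "noise"], scores), key=lambda t: -t[1])
print(ranked)                                # the informative feature ranks first
```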
  19. Shareef H, Mutlag AH, Mohamed A
    Comput Intell Neurosci, 2017;2017:1673864.
    PMID: 28702051 DOI: 10.1155/2017/1673864
    Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust with respect to fast-changing environmental conditions, efficiency, accuracy at steady state, and the dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the maximum power point accurately. An RF-based tracker is designed for 25 SolarTIF STF-120P6 PV modules with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated in the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate its accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of artificial neural network and adaptive neuro-fuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement over the other techniques. In addition, the RF model passes the Bland-Altman test with more than 95 percent acceptability.
    Matched MeSH terms: Algorithms*
  20. Masuyama N, Loo CK, Wermter S
    Int J Neural Syst, 2019 Jun;29(5):1850052.
    PMID: 30764724 DOI: 10.1142/S0129065718500521
    This paper attempts to solve the typical problems of self-organizing growing network models, namely (a) the influence of the order of input data on the self-organizing ability, (b) instability on high-dimensional data and excessive sensitivity to noise, and (c) expensive computational cost, by integrating the Kernel Bayes Rule (KBR) and the Correntropy-Induced Metric (CIM) into the Adaptive Resonance Theory (ART) framework. KBR performs a covariance-free Bayesian computation that maintains fast and stable computation. CIM is a generalized similarity measurement that maintains a high noise-reduction ability even in a high-dimensional space. In addition, a Growing Neural Gas (GNG)-based topology construction process is integrated into the ART framework to enhance its self-organizing ability. Simulation experiments with synthetic and real-world datasets show that the proposed model has an outstandingly stable self-organizing ability across various test environments.
    Matched MeSH terms: Algorithms*
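    The Correntropy-Induced Metric with a Gaussian kernel can be sketched as follows; kernel normalisation conventions vary, and kappa(0) = 1 is assumed here.

```python
import numpy as np

def cim(x, y, sigma=1.0):
    """CIM(x, y) = sqrt(kappa(0) - mean correntropy), Gaussian kernel."""
    k = np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))  # per-dimension kernel
    return np.sqrt(1.0 - k.mean())

a = np.zeros(5)
print(cim(a, np.full(5, 0.1)))    # small distance for nearby points
print(cim(a, np.full(5, 10.0)))   # saturates near 1: robust to outliers/noise
```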