Displaying publications 141 - 160 of 1460 in total

  1. Shaddad RQ, Mohammad AB, Al-Gailani SA, Al-Hetar AM
    ScientificWorldJournal, 2014;2014:170471.
    PMID: 24772009 DOI: 10.1155/2014/170471
    The optical fiber is well adapted to pass multiple wireless signals having different carrier frequencies by using the radio-over-fiber (ROF) technique. However, multiple wireless signals that have the same carrier frequency cannot propagate over a single optical fiber, such as wireless multi-input multi-output (MIMO) signals feeding multiple antennas in the fiber wireless (FiWi) system. A novel optical frequency upconversion (OFU) technique is proposed to solve this problem. In this paper, the novel OFU approach is used to transmit three wireless MIMO signals over a 20 km standard single mode fiber (SMF). The OFU technique exploits one optical source to produce multiple wavelengths by delivering it to a LiNbO3 external optical modulator. The wireless MIMO signals are then modulated separately by LiNbO3 optical intensity modulators using the optical carriers generated by the OFU process. These modulators use the optical single-sideband with carrier (OSSB+C) modulation scheme to optimize the system performance against the fiber dispersion effect. Each wireless MIMO signal has a 2.4 GHz or 5 GHz carrier frequency, a 1 Gb/s data rate, and 16-quadrature amplitude modulation (QAM). The crosstalk between the wireless MIMO signals is highly suppressed, since each wireless MIMO signal is carried on a specific optical wavelength.
    Matched MeSH terms: Algorithms
  2. Al-Saiagh W, Tiun S, Al-Saffar A, Awang S, Al-Khaleefa AS
    PLoS One, 2018;13(12):e0208695.
    PMID: 30571777 DOI: 10.1371/journal.pone.0208695
    Word sense disambiguation (WSD) is the process of identifying an appropriate sense for an ambiguous word. With the complexity of human languages, in which a single word can yield different meanings, WSD has been utilized by several domains of interest such as search engines and machine translation. The literature shows a vast number of techniques used for the process of WSD. Recently, researchers have focused on the use of meta-heuristic approaches to identify the best solutions that reflect the best sense. However, the application of meta-heuristic approaches remains limited and thus requires the efficient exploration and exploitation of the problem space. Hence, the current study aims to propose a hybrid meta-heuristic method that consists of particle swarm optimization (PSO) and simulated annealing to find the global best meaning of a given text. Different semantic measures have been utilized in this model as objective functions for the proposed hybrid PSO. These measures consist of the JCN and extended Lesk methods, which are combined effectively in this work. The proposed method is tested on three benchmark datasets (SemCor 3.0, SensEval-2, and SensEval-3). Results show that the proposed method has superior performance in comparison with state-of-the-art approaches.
    Matched MeSH terms: Algorithms*
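The hybrid PSO plus simulated-annealing search described in the abstract above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: a continuous objective stands in for the JCN/extended-Lesk sense-scoring functions, and the swarm size, inertia, and cooling schedule are assumed values.

```python
import math, random

def hybrid_pso_sa(objective, dim, n_particles=20, iters=100,
                  w=0.7, c1=1.5, c2=1.5, temp0=1.0, cooling=0.95):
    """Maximise `objective` with particle swarm optimisation, adding a
    simulated-annealing walk around the leader to escape local optima."""
    rnd = random.Random(42)
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    cur, cur_val, temp = gbest[:], gbest_val, temp0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
        # simulated-annealing refinement: Metropolis acceptance lets the
        # SA state accept a worse neighbour while temp is still high
        cand = [x + rnd.gauss(0, 0.2) for x in cur]
        cand_val = objective(cand)
        if cand_val > cur_val or rnd.random() < math.exp((cand_val - cur_val) / temp):
            cur, cur_val = cand, cand_val
            if cur_val > gbest_val:
                gbest, gbest_val = cur[:], cur_val
        temp *= cooling
    return gbest, gbest_val

# toy continuous objective standing in for a WSD sense-scoring function
sol, val = hybrid_pso_sa(lambda x: -sum((xi - 1.0) ** 2 for xi in x), dim=2)
```

The annealing step gives the method the exploitation/exploration balance the abstract emphasises: PSO pulls the swarm toward the incumbent best while the Metropolis rule occasionally accepts worse candidates early on.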
  3. Kumar V, Kumar S, AlShboul R, Aggarwal G, Kaiwartya O, Khasawneh AM, et al.
    Sensors (Basel), 2021 Jun 08;21(12).
    PMID: 34201100 DOI: 10.3390/s21123948
    Recently, green computing has received significant attention for Internet of Things (IoT) environments due to the growing computing demands under tiny sensor enabled smart services. The related literature on green computing majorly focuses on a cover set approach that works efficiently for target coverage, but it is not applicable in the case of area coverage. In this paper, we present a new variant of the cover set approach, called the grouping and sponsoring aware IoT framework (GS-IoT), that is suitable for area coverage. We achieve non-overlapping coverage for an entire sensing region employing sectorial sensing. Non-overlapping coverage not only guarantees sufficiently good coverage in the case of a large number of randomly deployed sensors, but also maximizes the life span of the whole network with appropriate scheduling of sensors. A deployment model for the distribution of sensors is developed to ensure a minimum threshold density of sensors in the sensing region. In particular, a fast converging grouping (FCG) algorithm is developed to group sensors in order to ensure minimal overlapping, and a sponsoring aware sectorial coverage (SSC) algorithm is developed to switch off redundant sensors and to balance the overall network energy consumption. The GS-IoT framework effectively combines both algorithms for smart services. The simulation results attest to the benefit of the proposed framework as compared to state-of-the-art techniques in terms of various metrics for smart IoT environments, including rate of overlapping, response time, coverage, active sensors, and life span of the overall network.
    Matched MeSH terms: Algorithms
  4. Hui KH, Ooi CS, Lim MH, Leong MS, Al-Obaidi SM
    PLoS One, 2017;12(12):e0189143.
    PMID: 29261689 DOI: 10.1371/journal.pone.0189143
    A major issue of machinery fault diagnosis using vibration signals is that it is over-reliant on personnel knowledge and experience in interpreting the signal. Thus, machine learning has been adapted for machinery fault diagnosis. The quantity and quality of the input features, however, influence the fault classification performance. Feature selection plays a vital role in selecting the most representative feature subset for the machine learning algorithm. However, a trade-off between the capability to select the best feature subset and the computational effort is inevitable in the wrapper-based feature selection (WFS) method. This paper proposes an improved WFS technique, integrated with a support vector machine (SVM) classifier, as a complete fault diagnosis system for a rolling element bearing case study. The bearing vibration dataset made available by the Case Western Reserve University Bearing Data Centre was processed using the proposed WFS, and its performance has been analysed and discussed. The results reveal that the proposed WFS secures the best feature subset with a lower computational effort by eliminating the redundancy of re-evaluation. The proposed WFS has therefore been found to be capable of carrying out feature selection tasks efficiently.
    Matched MeSH terms: Algorithms
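The "redundancy of re-evaluation" the abstract above eliminates can be illustrated with a wrapper that caches every subset it has already scored. The greedy forward search and the threshold-rule evaluator below are illustrative assumptions only; the paper pairs its wrapper with an SVM classifier on bearing vibration features.

```python
def wrapper_select(X, y, evaluate, max_k=3):
    """Greedy forward wrapper feature selection with a score cache, so that
    no feature subset is ever re-evaluated (the redundancy-elimination idea)."""
    cache = {}
    def score(subset):
        key = tuple(sorted(subset))
        if key not in cache:
            cache[key] = evaluate([[row[j] for j in key] for row in X], y)
        return cache[key]
    selected, remaining = [], list(range(len(X[0])))
    while remaining and len(selected) < max_k:
        best = max(remaining, key=lambda j: score(selected + [j]))
        if selected and score(selected + [best]) <= score(selected):
            break                      # no improvement: stop early
        selected.append(best)
        remaining.remove(best)
    return selected

# hypothetical evaluator: accuracy of a threshold rule on the subset's first feature
def toy_eval(Xs, y):
    preds = [1 if row[0] > 0.5 else 0 for row in Xs]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

X = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]]
y = [1, 1, 0, 0]
chosen = wrapper_select(X, y, toy_eval)
```

Because the cache key is the sorted subset, re-visiting the same combination during later greedy rounds costs a dictionary lookup instead of a full classifier retraining.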
  5. Yau KL, Poh GS, Chien SF, Al-Rawi HA
    ScientificWorldJournal, 2014;2014:209810.
    PMID: 24995352 DOI: 10.1155/2014/209810
    Cognitive radio (CR) enables unlicensed users to exploit underutilized licensed spectrum whilst minimizing interference to licensed users. Reinforcement learning (RL), an artificial intelligence approach, has been applied to enable each unlicensed user to observe and carry out optimal actions for performance enhancement in a wide range of CR schemes, such as dynamic channel selection and channel sensing. This paper presents new discussions of RL in the context of CR networks. It provides an extensive review of how most schemes have been approached using traditional and enhanced RL algorithms through state, action, and reward representations. Examples of the enhancements to RL, which do not appear in the traditional RL approach, are rules and cooperative learning. This paper also reviews performance enhancements brought about by the RL algorithms and open issues. This paper aims to establish a foundation in order to spark new research interests in this area. Our discussion is presented in a tutorial manner so that it is comprehensible to readers outside the specialty of RL and CR.
    Matched MeSH terms: Algorithms*
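As a concrete instance of the RL-for-dynamic-channel-selection schemes the review above surveys, a stateless Q-learning agent can learn which channel is most often idle. The idle probabilities, learning rate, and epsilon-greedy policy below are illustrative assumptions, not taken from any reviewed scheme.

```python
import random

def q_learning_channel_selection(idle_prob, episodes=5000,
                                 alpha=0.1, epsilon=0.1, seed=7):
    """Stateless Q-learning for dynamic channel selection: reward 1 when
    the chosen channel is idle, 0 when a licensed user occupies it. The
    agent converges on the channel with the highest idle rate."""
    rnd = random.Random(seed)
    q = [0.0] * len(idle_prob)
    for _ in range(episodes):
        if rnd.random() < epsilon:                       # explore
            a = rnd.randrange(len(q))
        else:                                            # exploit
            a = max(range(len(q)), key=lambda i: q[i])
        reward = 1.0 if rnd.random() < idle_prob[a] else 0.0
        q[a] += alpha * (reward - q[a])                  # Q-value update
    return q

# hypothetical idle probabilities for three licensed channels
q_values = q_learning_channel_selection([0.2, 0.9, 0.5])
best_channel = max(range(3), key=lambda i: q_values[i])
```

Here the state space is trivial (a single state), which matches the simplest channel-selection formulations in the review; richer schemes add channel-quality states and cooperative reward sharing.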
  6. Hussain S, Mustafa MW, Al-Shqeerat KHA, Saeed F, Al-Rimy BAS
    Sensors (Basel), 2021 Dec 17;21(24).
    PMID: 34960516 DOI: 10.3390/s21248423
    This study presents a novel feature-engineered-natural gradient descent ensemble-boosting (NGBoost) machine-learning framework for detecting fraud in power consumption data. The proposed framework was sequentially executed in three stages: data pre-processing, feature engineering, and model evaluation. It utilized the random forest algorithm-based imputation technique initially to impute the missing data entries in the acquired smart meter dataset. In the second phase, the majority weighted minority oversampling technique (MWMOTE) algorithm was used to avoid an unequal distribution of data samples among different classes. The time-series feature-extraction library and whale optimization algorithm were utilized to extract and select the most relevant features from the kWh reading of consumers. Once the most relevant features were acquired, the model training and testing process was initiated by using the NGBoost algorithm to classify the consumers into two distinct categories ("Healthy" and "Theft"). Finally, each input feature's impact (positive or negative) in predicting the target variable was recognized with the tree SHAP additive-explanations algorithm. The proposed framework achieved an accuracy of 93%, recall of 91%, and precision of 95%, which was greater than all the competing models, and thus validated its efficacy and significance in the studied field of research.
    Matched MeSH terms: Algorithms*
  7. Hag A, Handayani D, Altalhi M, Pillai T, Mantoro T, Kit MH, et al.
    Sensors (Basel), 2021 Dec 15;21(24).
    PMID: 34960469 DOI: 10.3390/s21248370
    In real-life applications, electroencephalogram (EEG) signals for mental stress recognition require a conventional wearable device. This, in turn, requires an efficient number of EEG channels and an optimal feature set. This study aims to identify an optimal feature subset that can discriminate mental stress states while enhancing the overall classification performance. We extracted multi-domain features within the time domain, frequency domain, time-frequency domain, and network connectivity features to form a prominent feature vector space for stress. We then proposed a hybrid feature selection (FS) method using minimum redundancy maximum relevance with particle swarm optimization and support vector machines (mRMR-PSO-SVM) to select the optimal feature subset. The performance of the proposed method is evaluated and verified using four datasets, namely EDMSS, DEAP, SEED, and EDPMSC. To further consolidate, the effectiveness of the proposed method is compared with that of state-of-the-art metaheuristic methods. The proposed model significantly reduced the feature vector space by an average of 70% compared with the state-of-the-art methods while significantly increasing the overall detection performance.
    Matched MeSH terms: Algorithms*
  8. Al-Zuhair S
    Biotechnol Prog, 2005 Sep-Oct;21(5):1442-8.
    PMID: 16209548
    The kinetics of biodiesel production by enzymatic methanolysis of vegetable oils using lipase has been investigated. A mathematical model has been developed that accounts for the mechanism of the methanolysis reaction starting from the vegetable oil as substrate, rather than from free fatty acids. The kinetic parameters were estimated by fitting experimental data for the enzymatic reaction of sunflower oil with two types of lipases, namely Rhizomucor miehei lipase (RM) immobilized on ion-exchange resins and Thermomyces lanuginosa lipase (TL) immobilized on silica gel. There was good agreement between the experimental initial rates of reaction and those predicted by the proposed model equations for both enzymes. From the proposed model equations, the regions where the effect of alcohol inhibition fades, at different substrate concentrations, were identified. The proposed model equation can be used to predict the rate of methanolysis of vegetable oils in a batch or a continuous reactor and to determine the optimal conditions for biodiesel production.
    Matched MeSH terms: Algorithms
  9. Alkhasawneh MSh, Ngah UK, Tay LT, Mat Isa NA, Al-batah MS
    ScientificWorldJournal, 2013;2013:415023.
    PMID: 24453846 DOI: 10.1155/2013/415023
    Landslide is one of the natural disasters that occur in Malaysia. Topographic factors such as elevation, slope angle, slope aspect, general curvature, plan curvature, and profile curvature are considered as the main causes of landslides. In order to determine the dominant topographic factors in landslide mapping analysis, a study was conducted and presented in this paper. There are three main stages involved in this study. The first stage is the extraction of extra topographic factors. Previous landslide studies had identified mainly six topographic factors. Seven new additional factors have been proposed in this study. They are longitude curvature, tangential curvature, cross section curvature, surface area, diagonal line length, surface roughness, and rugosity. The second stage is the specification of the weight of each factor using two methods. The methods are multilayer perceptron (MLP) network classification accuracy and Zhou's algorithm. At the third stage, the factors with higher weights were used to improve the MLP performance. Out of the thirteen factors, eight factors were considered as important factors, which are surface area, longitude curvature, diagonal length, slope angle, elevation, slope aspect, rugosity, and profile curvature. The classification accuracy of multilayer perceptron neural network has increased by 3% after the elimination of five less important factors.
    Matched MeSH terms: Algorithms*
  10. Namazi H, Aghasian E, Ala TS
    Technol Health Care, 2020;28(1):57-66.
    PMID: 31104032 DOI: 10.3233/THC-181579
    Analysis of human brain activity is an important topic in human neuroscience. Human brain activity can be studied by analyzing the electroencephalography (EEG) signal. In this way, scientists have employed several techniques that investigate the nonlinear dynamics of EEG signals. Fractal theory, as a promising technique, has shown its capability to analyze the nonlinear dynamics of time series. Since EEG signals have fractal patterns, in this research we analyze the variations in fractal dynamics of EEG signals across four datasets collected from healthy subjects under eyes-open and eyes-closed conditions and from patients with epilepsy who did and did not experience seizures. The obtained results showed that the EEG signal during seizure has the greatest complexity and the EEG signal during the seizure-free interval has the lowest complexity. In order to verify the results of the fractal analysis, we employ approximate entropy, which indicates the randomness of a time series. The approximate entropy results confirmed the fractal analysis results. The results of this research show the effectiveness of fractal theory in investigating the nonlinear structure of the EEG signal under different conditions.
    Matched MeSH terms: Algorithms
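Approximate entropy, used in the study above to cross-check the fractal results, has a standard textbook definition (Pincus's ApEn), sketched below on synthetic series. One simplifying assumption: the tolerance r is a fixed value rather than being scaled by the signal's standard deviation, as is usually done with real EEG.

```python
import math, random

def approximate_entropy(x, m=2, r=0.2):
    """Pincus's approximate entropy ApEn(m, r): near zero for regular,
    predictable series; larger for irregular, random-looking series."""
    n = len(x)
    def phi(m):
        templates = [x[i:i + m] for i in range(n - m + 1)]
        log_sum = 0.0
        for t1 in templates:
            # count templates within Chebyshev distance r (self-match included)
            matches = sum(1 for t2 in templates
                          if max(abs(a - b) for a, b in zip(t1, t2)) <= r)
            log_sum += math.log(matches / len(templates))
        return log_sum / len(templates)
    return phi(m) - phi(m + 1)

random.seed(1)
regular = [float(i % 2) for i in range(100)]       # perfectly periodic
noisy = [random.random() for _ in range(100)]      # uniform noise
apen_regular = approximate_entropy(regular)
apen_noisy = approximate_entropy(noisy)
```

The periodic series scores near zero while the noise scores well above it, which is exactly the property the authors exploit to rank the complexity of the four EEG conditions.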
  11. Farook TH, Jamayet NB, Abdullah JY, Alam MK
    Pain Res Manag, 2021;2021:6659133.
    PMID: 33986900 DOI: 10.1155/2021/6659133
    Purpose: The study explored the clinical influence, effectiveness, limitations, and human comparison outcomes of machine learning in diagnosing (1) dental diseases, (2) periodontal diseases, (3) trauma and neuralgias, (4) cysts and tumors, (5) glandular disorders, and (6) bone and temporomandibular joint as possible causes of dental and orofacial pain.

    Method: Scopus, PubMed, and Web of Science (all databases) were searched by 2 reviewers until 29th October 2020. Articles were screened and narratively synthesized according to PRISMA-DTA guidelines based on predefined eligibility criteria. Articles that made direct reference test comparisons to human clinicians were evaluated using the MI-CLAIM checklist. The risk of bias was assessed by JBI-DTA critical appraisal, and certainty of the evidence was evaluated using the GRADE approach. Information regarding the quantification method of dental pain and disease, the conditional characteristics of both training and test data cohort in the machine learning, diagnostic outcomes, and diagnostic test comparisons with clinicians, where applicable, were extracted.

    Results: 34 eligible articles were found for data synthesis, of which 8 articles made direct reference comparisons to human clinicians. 7 papers scored over 13 (out of the evaluated 15 points) in the MI-CLAIM approach with all papers scoring 5+ (out of 7) in JBI-DTA appraisals. GRADE approach revealed serious risks of bias and inconsistencies with most studies containing more positive cases than their true prevalence in order to facilitate machine learning. Patient-perceived symptoms and clinical history were generally found to be less reliable than radiographs or histology for training accurate machine learning models. A low agreement level between clinicians training the models was suggested to have a negative impact on the prediction accuracy. Reference comparisons found nonspecialized clinicians with less than 3 years of experience to be disadvantaged against trained models.

    Conclusion: Machine learning in dental and orofacial healthcare has shown respectable results in diagnosing diseases with symptomatic pain and, with improved future iterations, can be used as a diagnostic aid in the clinic. The current review did not internally analyze the machine learning models and their respective algorithms, nor consider the confounding variables and factors responsible for shaping the orofacial disorders responsible for eliciting pain.

    Matched MeSH terms: Algorithms
  12. Al-Quraishi MS, Ishak AJ, Ahmad SA, Hasan MK, Al-Qurishi M, Ghapanchizadeh H, et al.
    Med Biol Eng Comput, 2017 May;55(5):747-758.
    PMID: 27484411 DOI: 10.1007/s11517-016-1551-4
    Electromyography (EMG)-based control is the core of prostheses, orthoses, and other rehabilitation devices in recent research. Nonetheless, EMG is difficult to use as a control signal given the complex nature of the signal. To overcome this problem, researchers have employed pattern recognition techniques. EMG pattern recognition mainly involves four stages: signal detection and preprocessing, feature extraction, dimensionality reduction, and classification. In particular, the success of any pattern recognition technique depends on the feature extraction stage. In this study, a modified time-domain feature set, the logarithmic transferred time-domain features (LTD), was evaluated and compared with the traditional time-domain feature set (TTD). Three classifiers were employed to assess the two feature sets, namely linear discriminant analysis (LDA), k-nearest neighbors, and Naïve Bayes. Results indicated the superiority of the new time-domain feature set (LTD) over the conventional time-domain features (TTD), with an average classification accuracy of 97.23%. In addition, the LDA classifier outperformed the other two classifiers considered in this study.
    Matched MeSH terms: Algorithms
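The TTD-versus-LTD contrast in the study above can be illustrated as below. The four features (MAV, WL, ZC, SSC) are the classic time-domain EMG set; the exact TTD and LTD definitions used by the authors may differ, so treat the log transform here as an assumed sketch of the idea, not the paper's formula.

```python
import math

def time_domain_features(window):
    """Classic time-domain EMG features: mean absolute value (MAV),
    waveform length (WL), zero crossings (ZC), slope-sign changes (SSC)."""
    n = len(window)
    mav = sum(abs(v) for v in window) / n
    wl = sum(abs(window[i + 1] - window[i]) for i in range(n - 1))
    zc = sum(1 for i in range(n - 1) if window[i] * window[i + 1] < 0)
    ssc = sum(1 for i in range(1, n - 1)
              if (window[i] - window[i - 1]) * (window[i] - window[i + 1]) > 0)
    return [mav, wl, float(zc), float(ssc)]

def log_transferred_features(window):
    """Assumed LTD variant: the logarithm of each (positive) time-domain
    feature, compressing the dynamic range before classification."""
    return [math.log(f + 1e-12) for f in time_domain_features(window)]

# synthetic amplitude-modulated burst standing in for a real EMG window
emg = [math.sin(0.3 * i) * (1 + 0.1 * (i % 5)) for i in range(200)]
ttd = time_domain_features(emg)
ltd = log_transferred_features(emg)
```

Log-compressing the features narrows their spread across contraction intensities, which is one plausible reason a linear classifier such as LDA benefits from the LTD set.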
  13. Agatonovic-Kustrin S, Alany RG
    Pharm Res, 2001 Jul;18(7):1049-55.
    PMID: 11496944
    PURPOSE: A genetic neural network (GNN) model was developed to predict the phase behavior of microemulsion (ME), lamellar liquid crystal (LC), and coarse emulsion forming systems (W/O EM and O/W EM) depending on the content of separate components in the system and cosurfactant nature.

    METHOD: Eight pseudoternary phase triangles, containing ethyl oleate as the oil component and a mixture of two nonionic surfactants and n-alcohol or 1,2-alkanediol as a cosurfactant, were constructed and used for training, testing, and validation purposes. A total of 21 molecular descriptors were calculated for each cosurfactant. A genetic algorithm was used to select important molecular descriptors, and a supervised artificial neural network with two hidden layers was used to correlate selected descriptors and the weight ratio of components in the system with the observed phase behavior.

    RESULTS: The results proved the dominant role of the chemical composition, hydrophile-lipophile balance, length of hydrocarbon chain, molecular volume, and hydrocarbon volume of cosurfactant. The best GNN model, with 14 inputs and two hidden layers with 14 and 9 neurons, predicted the phase behavior for a new set of cosurfactants with 82.2% accuracy for ME, 87.5% for LC, 83.3% for the O/W EM, and 91.5% for the W/O EM region.

    CONCLUSIONS: This type of methodology can be applied in the evaluation of the cosurfactants for pharmaceutical formulations to minimize experimental effort.

    Matched MeSH terms: Algorithms*
  14. Almahdi EM, Zaidan AA, Zaidan BB, Alsalem MA, Albahri OS, Albahri AS
    J Med Syst, 2019 Jun 06;43(7):219.
    PMID: 31172296 DOI: 10.1007/s10916-019-1339-9
    This study presents a prioritisation framework for mobile patient monitoring systems (MPMSs) based on multicriteria analysis in architectural components. This framework selects the most appropriate system amongst available MPMSs for the telemedicine environment. Prioritisation of MPMSs is a challenging task due to (a) multiple evaluation criteria, (b) importance of criteria, (c) data variation and (d) unmeasurable values. The secondary data presented as the decision evaluation matrix include six systems (namely, Yale-National Aeronautics and Space Administration (NASA), advanced health and disaster aid network, personalised health monitoring, CMS, MobiHealth and NTU) as alternatives and 13 criteria (namely, supported number of sensors, sensor front-end (SFE) communication, SFE to mobile base unit (MBU) communications, display of biosignals on the MBU, storage of biosignals on the MBU, intra-body area network (BAN) communication problems, extra-BAN communication problems, extra-BAN communication technology, extra-BAN communication protocols, back-end system communication technology, intended geographic area of use, end-to-end security and reported trial problems) based on the architectural components of MPMSs. These criteria are adopted from the most relevant studies and are found to be applicable to this study. The prioritisation framework is developed in three stages. (1) The unmeasurable values of the MPMS evaluation criteria in the adopted decision evaluation matrix based on expert opinion are represented by using the best-worst method (BWM). (2) The importance of the evaluation criteria based on the architectural components of the MPMS is determined by using the BWM. (3) The VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method is utilised to rank the MPMSs according to the determined importance of the evaluation criteria and the adopted decision matrix. 
For validation, mean ± standard deviation is used to verify the similarity of systematic prioritisations objectively. The following results are obtained. (1) The BWM represents the unmeasurable values of the MPMS evaluation criteria. (2) The BWM is suitable for weighting the evaluation criteria based on the architectural components of the MPMS. (3) VIKOR is suitable for solving the MPMS prioritisation problem. Moreover, the internal and external VIKOR group decision making are approximately the same, with the best MPMS being 'Yale-NASA' and the worst MPMS being 'NTU'. (4) For the objective validation, remarkable differences are observed between the group scores, which indicate the similarity of internal and external prioritisation results.
    Matched MeSH terms: Algorithms
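The VIKOR ranking step of the framework above is well defined and small enough to sketch. The decision matrix and weights below are hypothetical, and all criteria are assumed benefit-type (higher is better), whereas the paper mixes criterion directions and derives its weights with the BWM.

```python
def vikor(matrix, weights, v=0.5):
    """VIKOR compromise ranking. All criteria are treated as benefit-type
    (higher is better); the alternative with the lowest Q is best."""
    m = len(weights)
    best = [max(row[j] for row in matrix) for j in range(m)]
    worst = [min(row[j] for row in matrix) for j in range(m)]
    S, R = [], []
    for row in matrix:
        # weighted normalised regret of this alternative on each criterion
        terms = [weights[j] * (best[j] - row[j]) / ((best[j] - worst[j]) or 1.0)
                 for j in range(m)]
        S.append(sum(terms))       # group utility
        R.append(max(terms))       # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    return [v * (S[i] - s_star) / ((s_minus - s_star) or 1.0)
            + (1 - v) * (R[i] - r_star) / ((r_minus - r_star) or 1.0)
            for i in range(len(matrix))]

# three hypothetical monitoring systems scored on three weighted criteria
q_scores = vikor([[0.9, 0.8, 0.7],
                  [0.5, 0.6, 0.4],
                  [0.7, 0.9, 0.6]],
                 weights=[0.5, 0.3, 0.2])
```

The parameter v balances group utility S against individual regret R; v = 0.5, used here, is the conventional compromise setting.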
  15. Raouf MA, Hashim F, Liew JT, Alezabi KA
    PLoS One, 2020;15(8):e0237386.
    PMID: 32790697 DOI: 10.1371/journal.pone.0237386
    The IEEE 802.11ah standard relies on the conventional distributed coordination function (DCF) as a backoff selection method. The DCF is utilized in the contention-based period of the newly introduced medium access control (MAC) mechanism, namely the restricted access window (RAW). Despite the various advantages of RAW, DCF still utilizes the legacy binary exponential backoff (BEB) algorithm, which suffers from a crucial disadvantage of being prone to a high probability of collisions with a high number of contending stations. To mitigate this issue, this paper investigates the possibility of replacing the existing exponential sequence (i.e., as in BEB) with a better pseudorandom sequence of integers. In particular, a new backoff algorithm, namely the Pseudorandom Sequence Contention Algorithm (PRSCA), is proposed to update the contention window (CW) size and minimize the collision probability. In addition, the proposed PRSCA incorporates a different approach to the CW freezing mechanism and backoff stage reset process. An analytical model is derived for the proposed PRSCA and presented through a discrete 2-D Markov chain model. Performance evaluation demonstrates the efficiency of the proposed PRSCA in reducing collision probability and improving saturation throughput, network throughput, and access delay performance.
    Matched MeSH terms: Algorithms*
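The contrast the paper above draws between the legacy exponential contention-window schedule and a precomputed integer sequence can be sketched as follows. The sequence values here are made-up placeholders, not the actual PRSCA sequence, and the CW bounds are the common 802.11 defaults.

```python
import random

CW_MIN, CW_MAX = 16, 1024          # common 802.11 DCF bounds

def beb_cw(stage):
    """Legacy binary exponential backoff: the contention window doubles
    with every collision until it saturates at CW_MAX."""
    return min(CW_MIN * (2 ** stage), CW_MAX)

def prs_cw(stage, sequence=(16, 48, 112, 240, 496, 1008)):
    """Pseudorandom-sequence alternative in the spirit of PRSCA: the window
    follows a precomputed integer sequence (placeholder values, not the
    paper's actual sequence)."""
    return sequence[min(stage, len(sequence) - 1)]

def backoff_slots(stage, cw_fn, rnd):
    """Uniform slot draw in [0, CW - 1], as in the 802.11 DCF."""
    return rnd.randrange(cw_fn(stage))

slots = backoff_slots(3, beb_cw, random.Random(0))
```

Swapping `beb_cw` for `prs_cw` in the slot draw changes only the window-growth schedule, which is the design axis the PRSCA explores (alongside its modified freezing and stage-reset rules, not modelled here).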
  16. Tayan O, Kabir MN, Alginahi YM
    ScientificWorldJournal, 2014;2014:514652.
    PMID: 25254247 DOI: 10.1155/2014/514652
    This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues have largely been addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover medium to achieve their goal. While many such complex schemes with resource redundancies are sufficient for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints.
    Matched MeSH terms: Algorithms*
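The zero-watermarking principle central to the paper above — derive a signature from the cover text without modifying it, and store that signature externally — can be shown in miniature. The feature choice and key handling below are assumptions for illustration; the paper's algorithm extracts richer content features and can also locate where tampering occurred.

```python
import hashlib

def zero_watermark(text, key="registry-secret"):
    """Zero-watermarking sketch: derive a signature from content features
    and a key, to be stored with a trusted party. The cover text itself
    is never modified."""
    # toy content features: length, word count, and a digest of the text
    features = "|".join([str(len(text)), str(len(text.split())),
                         hashlib.sha256(text.encode("utf-8")).hexdigest()])
    return hashlib.sha256((key + features).encode("utf-8")).hexdigest()

def verify_watermark(text, watermark, key="registry-secret"):
    """Integrity check: recompute the signature and compare."""
    return zero_watermark(text, key) == watermark

wm = zero_watermark("The quick brown fox jumps over the lazy dog.")
```

Because the watermark lives outside the document, the scheme meets the paper's constraint of protecting sensitive text that must not be altered even invisibly.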
  17. Amir S. A. Hamzah, Ali H. M. Murid
    MATEMATIKA, 2018;34(2):293-311.
    MyJurnal
    This study presents a mathematical model examining wastewater pollutant removal through an oxidation pond treatment system. The model describes the reaction between the microbe-based product mPHO (comprising phototrophic bacteria (PSB)), dissolved oxygen (DO) and a pollutant, namely chemical oxygen demand (COD). It consists of coupled advection-diffusion-reaction equations for the microorganism (PSB), DO and pollutant (COD) concentrations, respectively; the coupling arises from the reactions between PSB, DO and COD that produce harmless compounds. Since the model is a coupled, dynamic system of nonlinear partial differential equations (PDEs), a computational algorithm based on the implicit Crank-Nicolson method was employed to simulate the dynamical behaviour of the system. Numerical results revealed that the proposed model demonstrated high accuracy when compared with experimental data.
    Matched MeSH terms: Algorithms
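The implicit Crank-Nicolson discretisation named in the abstract above can be illustrated on a reduced problem: a single pollutant with diffusion and first-order decay, rather than the paper's coupled PSB/DO/COD advection-diffusion-reaction system. All coefficients and boundary conditions below are illustrative assumptions.

```python
def thomas_solve(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(rhs)
    b, d = diag[:], rhs[:]
    for i in range(1, n):
        m = sub[i] / b[i - 1]
        b[i] -= m * sup[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - sup[i] * x[i + 1]) / b[i]
    return x

def crank_nicolson_step(u, D, k, dx, dt):
    """One Crank-Nicolson step for u_t = D u_xx - k u with zero-Dirichlet
    boundaries: average the explicit and implicit discretisations."""
    n = len(u)
    r = D * dt / (2.0 * dx * dx)
    sub = [-r] * n
    sup = [-r] * n
    diag = [1.0 + 2.0 * r + k * dt / 2.0] * n
    rhs = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        rhs.append((1.0 - 2.0 * r - k * dt / 2.0) * u[i] + r * (left + right))
    return thomas_solve(sub, diag, sup, rhs)

# uniform initial pollutant profile, decayed and diffused for 100 steps
u = [1.0] * 50
for _ in range(100):
    u = crank_nicolson_step(u, D=0.01, k=0.05, dx=0.1, dt=0.1)
```

Crank-Nicolson is attractive here for the reason the authors imply: it is unconditionally stable and second-order accurate in time, so stiff reaction terms do not force tiny time steps.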
  18. Al-Fakih AM, Algamal ZY, Lee MH, Aziz M, Ali HTM
    SAR QSAR Environ Res, 2019 Jun;30(6):403-416.
    PMID: 31122062 DOI: 10.1080/1062936X.2019.1607899
    Time-varying binary gravitational search algorithm (TVBGSA) is proposed for predicting the antidiabetic activity of 134 dipeptidyl peptidase-IV (DPP-IV) inhibitors. To improve the performance of the binary gravitational search algorithm (BGSA), we propose a dynamic time-varying transfer function. A new control parameter, μ, is added to the original transfer function as a time-varying variable. The TVBGSA-based model was internally and externally validated based on Q²int, Q²LGO, Q²Boot, MSEtrain, Q²ext, MSEtest, the Y-randomization test, and applicability domain evaluation. The validation results indicate that the proposed TVBGSA model is robust and not due to chance correlation. The descriptor selection and prediction performance of TVBGSA outperform the BGSA method: TVBGSA shows a higher Q²int of 0.957, Q²LGO of 0.951, Q²Boot of 0.954, and Q²ext of 0.938, and lower MSEtrain and MSEtest than the results obtained by BGSA, indicating the better prediction performance of the proposed TVBGSA model. The results clearly reveal that the proposed TVBGSA method is useful for constructing reliable and robust QSARs for predicting the antidiabetic activity of DPP-IV inhibitors prior to the design and experimental synthesis of new DPP-IV inhibitors.
    Matched MeSH terms: Algorithms
  19. Al-Fakih AM, Algamal ZY, Lee MH, Aziz M, Ali HTM
    SAR QSAR Environ Res, 2019 Feb;30(2):131-143.
    PMID: 30734580 DOI: 10.1080/1062936X.2019.1568298
    An improved binary differential search (improved BDS) algorithm is proposed for QSAR classification of a diverse series of antimicrobial compounds against Candida albicans inhibitors. The transfer function is the most important component of the BDS algorithm; it converts continuous values of the donor into discrete values. In this paper, eight types of transfer functions are investigated to verify their efficiency in improving BDS algorithm performance in QSAR classification. The performance was evaluated using three metrics: classification accuracy (CA), geometric mean of sensitivity and specificity (G-mean), and area under the curve. The Kruskal-Wallis test was also applied to show the statistical differences between the functions. Two functions, S1 and V4, show the best classification achievement, with V4 performing slightly better than S1. The V4 function takes the fewest iterations and selects the fewest descriptors. In addition, the V4 function yields the best CA and G-mean of 98.07% and 0.977, respectively. The results prove that the V4 transfer function significantly improves the performance of the original BDS.
    Matched MeSH terms: Algorithms*
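S-shaped and V-shaped transfer functions, compared in the study above, map a continuous search value to a bit decision in binary metaheuristics. The S1 and V4 forms below are the standard ones from the binary-optimisation literature; whether they match the paper's exact definitions is an assumption.

```python
import math, random

def s1(x):
    """S-shaped transfer (S1): sigmoid giving the probability that the
    bit is set to 1."""
    return 1.0 / (1.0 + math.exp(-2.0 * x))

def v4(x):
    """V-shaped transfer (V4): |x / sqrt(1 + x^2)|, interpreted as the
    probability of flipping the current bit."""
    return abs(x / math.sqrt(1.0 + x * x))

def binarize_s(x, rnd):
    """S-shaped rule: resample the bit from the transfer probability."""
    return 1 if rnd.random() < s1(x) else 0

def binarize_v(bit, x, rnd):
    """V-shaped rule: flip the existing bit with the transfer probability."""
    return 1 - bit if rnd.random() < v4(x) else bit
```

The practical difference the paper measures follows from these update rules: V-shaped functions only flip bits when the search value is large, so near convergence (x near 0) the solution is left alone, which tends to need fewer iterations and select fewer descriptors.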
  20. Algamal ZY, Qasim MK, Lee MH, Ali HTM
    SAR QSAR Environ Res, 2020 Nov;31(11):803-814.
    PMID: 32938208 DOI: 10.1080/1062936X.2020.1818616
    High-dimensionality is one of the major problems which affect the quality of quantitative structure-activity relationship (QSAR) modelling. Obtaining a reliable QSAR model with few descriptors is an essential procedure in chemometrics. The binary grasshopper optimization algorithm (BGOA) is a new meta-heuristic optimization algorithm, which has been used successfully to perform feature selection. In this paper, four new transfer functions were adapted to improve the exploration and exploitation capability of the BGOA in QSAR modelling of influenza A viruses (H1N1). The QSAR model with these new quadratic transfer functions was internally and externally validated based on MSEtrain, the Y-randomization test, MSEtest, and the applicability domain (AD). The validation results indicate that the model is robust and not due to chance correlation. In addition, the results indicate that the descriptor selection and prediction performance of the QSAR model for the training dataset outperform those of the other S-shaped and V-shaped transfer functions. The QSAR model using the quadratic transfer function shows the lowest MSEtrain. For the test dataset, the proposed QSAR model shows a lower MSEtest than the other methods, indicating its higher predictive ability. In conclusion, the results reveal that the proposed approach is efficient for building high-dimensional QSAR models and is useful for the estimation of IC50 values of neuraminidase inhibitors that have not been experimentally tested.
    Matched MeSH terms: Algorithms*