Displaying publications 21 - 40 of 1459 in total

  1. Sachithanandan A, Lockman H, Azman RR, Tho LM, Ban EZ, Ramon V
    Med J Malaysia, 2024 Jan;79(1):9-14.
    PMID: 38287751
    INTRODUCTION: The poor prognosis of lung cancer has been largely attributed to the fact that most patients present with advanced stage disease. Although low dose computed tomography (LDCT) is presently considered the optimal imaging modality for lung cancer screening, its use has been hampered by cost and accessibility. One possible approach to facilitate lung cancer screening is to implement a risk-stratification step with chest radiography, given its ease of access and affordability. Furthermore, implementation of artificial intelligence (AI) in chest radiography is expected to improve the detection of indeterminate pulmonary nodules, which may represent early lung cancer.

    MATERIALS AND METHODS: This consensus statement was formulated by a panel of five experts of primary care and specialist doctors. A lung cancer screening algorithm was proposed for implementation locally.

    RESULTS: In an earlier pilot project collaboration, AI-assisted chest radiography had been incorporated into lung cancer screening in the community. Preliminary experience in the pilot project suggests that the system is easy to use, affordable and scalable. Drawing from experience with the pilot project, a standardised lung cancer screening algorithm using AI in Malaysia was proposed. Requirements for such a screening programme, expected outcomes and limitations of AI-assisted chest radiography were also discussed.

    CONCLUSION: The combined strategy of AI-assisted chest radiography and complementary LDCT imaging has great potential for detecting early-stage lung cancer in a timely manner, irrespective of risk status. The proposed screening algorithm provides a guide for clinicians in Malaysia to participate in screening efforts.

    Matched MeSH terms: Algorithms
  2. Tukkee AS, Bin Abdul Wahab NI, Binti Mailah NF, Bin Hassan MK
    PLoS One, 2024;19(2):e0298094.
    PMID: 38330067 DOI: 10.1371/journal.pone.0298094
    Recently, global interest in organizing the functioning of renewable energy resources (RES) through microgrids (MG) has developed, as a unique approach to tackle technical, economic, and environmental difficulties. This study proposes implementing a developed Distributable Resource Management strategy (DRMS) in hybrid microgrid systems to reduce the total net present cost (TNPC), energy loss (Ploss), and gas emissions (GEM), while taking the cost-benefit index (CBI) and loss of power supply probability (LPSP) as operational constraints. The Grey Wolf Optimizer (GWO) was utilized to find the optimal size of the hybrid microgrid components and calculate the multi-objective function with and without the proposed management method. In addition, a detailed sensitivity analysis of numerous economic and technological parameters was performed to assess system performance. The proposed strategy reduced the system's total net present cost, power loss, and emissions by 1.06%, 8.69%, and 17.19%, respectively, compared to normal operation. The Firefly Algorithm (FA) and Particle Swarm Optimization (PSO) techniques were used to verify the results. This study gives a more detailed plan for evaluating the effectiveness of hybrid microgrid systems from a technical, economic, and environmental perspective.
    Matched MeSH terms: Algorithms*
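
The paper above does not include implementation details, but its optimisation step is built on the Grey Wolf Optimizer. Below is a minimal, self-contained Python sketch of a GWO loop applied to a generic component-sizing objective; the cost function, penalty term, decision variables, and bounds are illustrative assumptions and are not taken from the paper.

import numpy as np

def gwo_minimize(objective, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimal Grey Wolf Optimizer: wolves move toward the three best
    solutions found so far (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iter):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iter                      # exploration factor decays to 0
        for i in range(n_wolves):
            new_pos = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0   # average of the three pulls
            wolves[i] = np.clip(new_pos, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)], fitness.min()

# Hypothetical sizing problem: x = [PV_kW, wind_kW, battery_kWh]; the weights and
# penalty below merely stand in for TNPC/Ploss/GEM terms and are not from the paper.
cost = lambda x: 1200 * x[0] + 1800 * x[1] + 300 * x[2] + 1e4 * max(0.0, 50 - x.sum())
best, val = gwo_minimize(cost, np.array([[0, 100]] * 3))
print(best, val)

In the study itself, the objective would instead combine TNPC, Ploss and GEM under the CBI and LPSP constraints.
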
  3. Husnain AU, Mokhtar N, Mohamed Shah NB, Dahari MB, Azmi AA, Iwahashi M
    PLoS One, 2024;19(2):e0296969.
    PMID: 38394180 DOI: 10.1371/journal.pone.0296969
    There are three primary objectives of this work: first, to establish a gas concentration map; second, to estimate the point of emission of the gas; and third, to generate a path from any location to the point of emission for UAVs or UGVs. A mountable array of MOX sensors was developed so that the angles and distances among the sensors, alongside sensor data, were utilized to identify the influx of gas plumes. Gas dispersion experiments under indoor conditions were conducted to train machine learning algorithms by collecting data at numerous locations and angles. Taguchi's orthogonal arrays for experiment design were used to identify the gas dispersion locations. For the second objective, the pre-processed data were used to train an off-policy, model-free reinforcement learning agent with a Q-learning policy. After finishing training on the training dataset, Q-learning produces a table called the Q-table. The Q-table contains state-action pairs that generate an autonomous path from any point to the source for the testing dataset. The entire process is carried out in an obstacle-free environment, and the whole scheme is designed to be conducted in three modes: search, track, and localize. The hyperparameter combinations of the RL agent were evaluated through a trial-and-error technique, and it was found that ε = 0.9, γ = 0.9 and α = 0.9 was the fastest path-generating combination, taking 1258.88 seconds for training and 6.2 milliseconds for path generation. Out of 31 unseen scenarios, the trained RL agent generated successful paths for all 31; however, the UAV was able to reach the gas source successfully in only 23 scenarios, producing a success rate of 74.19%. The results pave the way for reinforcement learning techniques to be used for autonomous path generation in unmanned systems, with exploring and improving the accuracy of the reported results left as future work.
    Matched MeSH terms: Algorithms*
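
As a rough illustration of the tabular Q-learning described above, the sketch below learns a Q-table on a toy grid world using the hyperparameter values reported in the abstract (ε = 0.9, γ = 0.9, α = 0.9) and then reads a greedy path to the source from the table. The grid layout, reward shaping, and source location are hypothetical; the study's actual state space is built from MOX sensor readings, angles, and distances.

import numpy as np

# Illustrative 5x5 grid; the real state space in the paper differs.
GRID, SOURCE = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]        # up, down, left, right
alpha, gamma, epsilon = 0.9, 0.9, 0.9               # values reported in the abstract
Q = np.zeros((GRID, GRID, len(ACTIONS)))
rng = np.random.default_rng(0)

def step(state, a):
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1))
    reward = 100.0 if nxt == SOURCE else -1.0       # hypothetical reward shaping
    return nxt, reward, nxt == SOURCE

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # epsilon-greedy: explore with probability epsilon, otherwise exploit
        a = rng.integers(4) if rng.random() < epsilon else int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        # Q-learning update toward the best next-state action value
        Q[state][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][a])
        state = nxt

# Greedy path read back from the learned Q-table
state, path = (0, 0), [(0, 0)]
while state != SOURCE and len(path) < 50:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    path.append(state)
print(path)
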
  4. T A, G G, P AMD, Assaad M
    PLoS One, 2024;19(3):e0299653.
    PMID: 38478485 DOI: 10.1371/journal.pone.0299653
    Mechanical ventilation techniques are vital for preserving the lives of critically ill individuals in prolonged hospitalization units. Nevertheless, an imbalance between the hospitalized patient's demands and the respiratory support system can cause inconsistencies in the patient's inhalation. To tackle this problem, this study presents an Iterative Learning PID Controller (ILC-PID), a unique current cycle feedback type controller that helps in attaining the correct pressure and volume. The paper also offers a clear and complete examination of the primarily efficient neural approach for generating optimal inhalation strategies. Moreover, machine learning-based classifiers are used to evaluate the precision and performance of the ILC-PID controller. These classifiers are able to forecast and choose the appropriate type for various inhalation modes, eliminating the likelihood that patients will require mechanical ventilation. In pressure control, the suggested neural classification exhibited an average accuracy rate of 88.2% in continuous positive airway pressure (CPAP) mode and 91.7% in proportional assist ventilation (PAV) mode, compared with other classifiers such as the ensemble classifier, which had reduced accuracy rates of 69.5% in CPAP mode and 71.7% in PAV mode; the other classifiers averaged 78.9% accuracy in CPAP mode compared with the neural network. In the volume investigation, for 20 cm H2O of volume, the neural network classifier achieved an average of 81.6% in CPAP mode and 84.59% in PAV mode, whereas the other classifiers averaged 72.17% in CPAP mode and 77.83% in PAV mode in volume control. Different approaches, such as decision trees, optimizable Bayes trees, naive Bayes trees, nearest-neighbour trees, and an ensemble of trees, were also evaluated in terms of confusion-matrix accuracy, training duration, specificity, sensitivity, and F1 score.
    Matched MeSH terms: Algorithms
  5. Zafar F, Malik SA, Ali T, Daraz A, Afzal AR, Bhatti F, et al.
    PLoS One, 2024;19(2):e0298624.
    PMID: 38354203 DOI: 10.1371/journal.pone.0298624
    In this paper, we propose two different control strategies for the position control of the ball in the ball and beam system (BBS). The first control strategy uses the proportional integral derivative-second derivative with proportional integrator (PIDD2-PI). The second control strategy uses the tilt integral derivative with filter (TID-F). The designed controllers employ two distinct metaheuristic computation techniques, grey wolf optimization (GWO) and the whale optimization algorithm (WOA), for parameter tuning. We evaluated the dynamic and steady-state performance of the proposed control strategies using four performance indices. In addition, to analyze the robustness of the proposed control strategies, a comprehensive comparison was performed with a variety of controllers, including tilt integral-derivative (TID), fractional order proportional integral derivative (FOPID), integral-proportional derivative (I-PD), proportional integral-derivative (PI-D), and proportional integral proportional derivative (PI-PD). By comparing different test cases, including variation in the parameters of the BBS with disturbance, we examine the step response, set point tracking, disturbance rejection, and robustness of the proposed control strategies. The comprehensive comparison of results shows that WOA-PIDD2-PI-ISE and GWO-TID-F-ISE perform best. Moreover, the proposed control strategies yield an oscillation-free, stable, and quick response, which confirms their robustness to disturbance and parameter variation of the BBS as well as their tracking performance. The proposed controllers can be applied in practice to underactuated mechanical systems (UMS), robotics, and industrial automation. The proposed control strategies were successfully tested in MATLAB simulation.
    Matched MeSH terms: Algorithms*
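
The abstract above evaluates controllers with performance indices such as the integral of squared error (ISE). The sketch below shows the general idea of metaheuristic PID tuning: simulate a step response, score it with ISE, and let an optimizer search the gain space. The second-order plant, the gain bounds, and the use of SciPy's differential evolution (as a stand-in for GWO/WOA) are illustrative assumptions, not the paper's setup.

import numpy as np
from scipy.optimize import differential_evolution

# Illustrative plant G(s) = 1 / (s^2 + 3s + 2); the BBS dynamics in the paper differ.
def ise_of_pid(gains, dt=0.001, t_end=5.0):
    """Integral of squared error for a unit step, with the PID loop simulated by Euler integration."""
    kp, ki, kd = gains
    y = dy = integ = prev_err = 0.0
    ise = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                           # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv  # PID control law
        ddy = u - 3.0 * dy - 2.0 * y            # plant state update
        dy += ddy * dt
        y += dy * dt
        ise += err * err * dt
        prev_err = err
    return ise

# Differential evolution stands in here for the GWO/WOA tuning used in the paper.
result = differential_evolution(ise_of_pid, bounds=[(0, 50), (0, 50), (0, 10)],
                                seed=1, maxiter=15, popsize=10)
print("tuned (kp, ki, kd):", result.x, "ISE:", result.fun)
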
  6. Mandala S, Rizal A, Adiwijaya, Nurmaini S, Suci Amini S, Almayda Sudarisman G, et al.
    PLoS One, 2024;19(4):e0297551.
    PMID: 38593145 DOI: 10.1371/journal.pone.0297551
    Arrhythmia is a life-threatening cardiac condition characterized by irregular heart rhythm. Early and accurate detection is crucial for effective treatment. However, single-lead electrocardiogram (ECG) methods have limited sensitivity and specificity. This study proposes an improved ensemble learning approach for arrhythmia detection using multi-lead ECG data. The proposed method, based on a boosting algorithm named Fine Tuned Boosting (FTBO), detects multiple arrhythmia classes. For feature extraction, we introduce a new technique that utilizes a sliding window with a window size of 5 R-peaks. This study compared it with other models, including bagging and stacking, and assessed the impact of parameter tuning. Rigorous experiments on the MIT-BIH arrhythmia database focused on Premature Ventricular Contraction (PVC), Premature Atrial Contraction (PAC), and Atrial Fibrillation (AF) were performed. The results showed that the proposed method achieved high sensitivity, specificity, and accuracy for all three classes of arrhythmia. It accurately detected Atrial Fibrillation (AF) with 100% sensitivity and specificity. For Premature Ventricular Contraction (PVC) detection, it achieved 99% sensitivity and specificity in both leads. Similarly, for Premature Atrial Contraction (PAC) detection, the proposed method achieved almost 96% sensitivity and specificity in both leads. The proposed method shows great potential for early arrhythmia detection using multi-lead ECG data.
    Matched MeSH terms: Algorithms
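
The abstract mentions a feature-extraction technique based on a sliding window of 5 R-peaks. A minimal sketch of that idea is shown below: RR-interval statistics are computed over each window of five consecutive R-peaks and fed to a boosting classifier. The synthetic rhythms, the specific statistics, and the use of scikit-learn's AdaBoost (in place of the paper's Fine Tuned Boosting model) are assumptions for illustration only.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def rr_window_features(r_peak_times, window_peaks=5):
    """One feature vector per sliding window of `window_peaks` consecutive R-peaks,
    using simple RR-interval statistics (mean, std, min, max)."""
    rr = np.diff(r_peak_times)              # RR intervals between consecutive R-peaks
    n = window_peaks - 1                    # 5 R-peaks span 4 RR intervals
    feats = [[w.mean(), w.std(), w.min(), w.max()]
             for w in (rr[i:i + n] for i in range(len(rr) - n + 1))]
    return np.array(feats)

# Synthetic illustration only: a regular rhythm vs. one with irregular RR intervals.
rng = np.random.default_rng(0)
normal = np.cumsum(rng.normal(0.8, 0.02, 300))      # ~75 bpm, low variability
irregular = np.cumsum(rng.normal(0.8, 0.15, 300))   # high RR variability
X = np.vstack([rr_window_features(normal), rr_window_features(irregular)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
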
  7. Ismail AM, Ab Hamid SH, Abdul Sani A, Mohd Daud NN
    PLoS One, 2024;19(4):e0299585.
    PMID: 38603718 DOI: 10.1371/journal.pone.0299585
    The performance of a defect prediction model built using balanced versus imbalanced datasets has a big impact on the discovery of future defects. Current resampling techniques only address the imbalanced datasets without taking into consideration the redundancy and noise inherent to imbalanced datasets. To address the imbalance issue, we propose Kernel Crossover Oversampling (KCO), an oversampling technique based on kernel analysis and crossover interpolation. Specifically, the proposed technique aims to generate balanced datasets by increasing data diversity in order to reduce redundancy and noise. KCO first maps multidimensional features into two-dimensional features by employing Kernel Principal Component Analysis (KPCA). KCO then divides the plotted data distribution by deploying spectral clustering to select the best region for interpolation. Lastly, KCO generates the new defect data by interpolating different data templates within the selected data clusters. According to the prediction evaluation conducted, KCO produced F-scores ranging from 21% to 63% across six datasets, on average. The experimental results presented in this study show that KCO provides more effective prediction performance than other baseline techniques, and that it consistently achieves higher F-scores in both within-project and cross-project predictions.
    Matched MeSH terms: Algorithms*
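
A rough sketch of the KCO pipeline described above (kernel PCA to two dimensions, spectral clustering to pick regions, then crossover-style interpolation between samples of the same cluster) is given below. The clustering parameters, interpolation scheme, and toy data are assumptions; the authors' exact procedure may differ.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import SpectralClustering

def kco_like_oversample(X_min, n_new, n_clusters=3, seed=0):
    """Sketch of the KCO idea: project minority-class samples to 2-D with kernel PCA,
    cluster them, then create new samples by interpolating between pairs drawn
    from the same cluster (crossover-style interpolation)."""
    rng = np.random.default_rng(seed)
    z = KernelPCA(n_components=2, kernel="rbf").fit_transform(X_min)
    labels = SpectralClustering(n_clusters=n_clusters, random_state=seed,
                                assign_labels="kmeans").fit_predict(z)
    synthetic = []
    for _ in range(n_new):
        c = rng.integers(n_clusters)
        idx = np.flatnonzero(labels == c)
        if len(idx) < 2:
            continue
        a, b = X_min[rng.choice(idx, 2, replace=False)]
        lam = rng.random()
        synthetic.append(lam * a + (1 - lam) * b)   # interpolate in original feature space
    return np.array(synthetic)

# Toy minority class with 30 samples and 10 features
X_min = np.random.default_rng(1).normal(size=(30, 10))
print(kco_like_oversample(X_min, n_new=20).shape)
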
  8. Tian X, Tian Z, Khatib SFA, Wang Y
    PLoS One, 2024;19(4):e0300195.
    PMID: 38625972 DOI: 10.1371/journal.pone.0300195
    Internet finance has permeated myriad households, bringing about lifestyle convenience alongside potential risks. Presently, internet finance enterprises are progressively adopting machine learning and other artificial intelligence methods for risk alertness. What is the current status of the application of various machine learning models and algorithms across different institutions? Is there an optimal machine learning algorithm suited for the majority of internet finance platforms and application scenarios? Scholars have embarked on a series of studies addressing these questions; however, the focus predominantly lies in comparing different algorithms within specific platforms and contexts, lacking a comprehensive discourse and summary on the utilization of machine learning in this domain. Thus, based on data from the Web of Science and Scopus databases, this paper conducts a systematic literature review on all aspects of machine learning in internet finance risk in recent years, covering publication trends, geographical distribution, literature focus, machine learning models and algorithms, and evaluations. The research reveals that machine learning, as a nascent technology, whether through basic algorithms or intricate algorithmic combinations, has made significant strides compared to traditional credit scoring methods in prediction accuracy, time efficiency, and robustness in internet finance risk management. Nonetheless, there exist noticeable disparities among different algorithms, and factors such as model structure, sample data, and parameter settings also influence prediction accuracy, although generally, updated algorithms tend to achieve higher accuracy. Consequently, there is no one-size-fits-all approach applicable to all platforms; each platform should enhance its machine learning models and algorithms based on its unique characteristics, data, and the development of AI technology, starting from key evaluation indicators to mitigate internet finance risks.
    Matched MeSH terms: Algorithms
  9. Su C, Wei J, Lei Y, Xuan H, Li J
    PLoS One, 2024;19(4):e0298261.
    PMID: 38598458 DOI: 10.1371/journal.pone.0298261
    In the realm of targeted advertising, the demand for precision is paramount, and the traditional centralized machine learning paradigm fails to address this necessity effectively. Two critical challenges persist in the current advertising ecosystem: data privacy concerns leading to isolated data islands, and the complexity of handling non-Independent and Identically Distributed (non-IID) data and concept drift arising from the specificity and diversity of user behavior data. Current federated learning frameworks struggle to overcome these hurdles satisfactorily. This paper introduces Fed-GANCC, an innovative federated learning framework that synergizes Generative Adversarial Networks (GANs) and group clustering. The framework incorporates a user data augmentation algorithm based on adversarial generative networks, which not only enriches user behavior data but also curtails the impact of non-uniform data distribution, thereby enhancing the applicability of the global machine learning (ML) model. Experimental results vindicate the effectiveness of Fed-GANCC, revealing substantial enhancements in accuracy, loss value, and receiver operating characteristic (ROC) metrics compared to FED-AVG and FED-SGD given the same computing time. These outcomes underline Fed-GANCC's prowess in mitigating issues such as isolated data islands, non-IID data, and concept drift; the framework thus promises to offer pivotal insights for the future development of efficient and advanced federated learning solutions tailored for the advertising domain.
    Matched MeSH terms: Algorithms
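
The abstract compares Fed-GANCC against FED-AVG and FED-SGD. For readers unfamiliar with the baseline, the sketch below shows the FedAvg server aggregation step (a sample-size-weighted average of client parameters); it is not Fed-GANCC itself, and the toy model shapes and client sizes are hypothetical.

import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """FedAvg server step: average each parameter array across clients,
    weighted by the number of local samples each client trained on."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_layers)
    ]

# Toy example: three clients, each holding a tiny 2-layer linear model
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(3)]
sizes = [100, 250, 50]                      # local dataset sizes (hypothetical)
global_model = fedavg_aggregate(clients, sizes)
print([p.shape for p in global_model])
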
  10. Jackson-Morris A, Sembajwe R, Mustapha FI, Chandran A, Niyonsenga SP, Gishoma C, et al.
    Glob Health Action, 2023 Dec 31;16(1):2157542.
    PMID: 36692486 DOI: 10.1080/16549716.2022.2157542
    BACKGROUND: In 2019, the World Health Organization recognised diabetes as a clinically and pathophysiologically heterogeneous set of related diseases. Little is currently known about the diabetes phenotypes in the population of low- and middle-income countries (LMICs), yet identifying their different risks and aetiology has great potential to guide the development of more effective, tailored prevention and treatment.

    OBJECTIVES: This study reviewed the scope of diabetes datasets, health information ecosystems, and human resource capacity in four countries to assess whether a diabetes phenotyping algorithm (developed under a companion study) could be successfully applied.

    METHODS: The capacity assessment was undertaken with four countries: Trinidad, Malaysia, Kenya, and Rwanda. Diabetes programme staff completed a checklist of available diabetes data variables and then participated in semi-structured interviews about Health Information System (HIS) ecosystem conditions, diabetes programme context, and human resource needs. Descriptive analysis was undertaken.

    RESULTS: Only Malaysia collected the full set of the required diabetes data for the diabetes algorithm, although all countries did collect the required diabetes complication data. An HIS ecosystem existed in all settings, with variations in data hosting and sharing. All countries had access to HIS or ICT support, and epidemiologists or biostatisticians to support dataset preparation and algorithm application.

    CONCLUSIONS: Malaysia was found to be most ready to apply the phenotyping algorithm. A fundamental impediment in the other settings was the absence of several core diabetes data variables. Additionally, if countries digitise diabetes data collection and centralise diabetes data hosting, this will simplify dataset preparation for algorithm application. These issues reflect common LMIC health systems' weaknesses in relation to diabetes care, and specifically highlight the importance of investment in improving diabetes data, which can guide population-tailored prevention and management approaches.

    Matched MeSH terms: Algorithms
  11. Ong P, Jian J, Li X, Zou C, Yin J, Ma G
    PMID: 37356390 DOI: 10.1016/j.saa.2023.123037
    The proliferation of pathogenic fungi in sugarcane crops poses a significant threat to agricultural productivity and economic sustainability. Early identification and management of sugarcane diseases are therefore crucial to mitigate the adverse impacts of these pathogens. In this study, visible and near-infrared spectroscopy (380-1400 nm) combined with a novel wavelength selection method, referred to as the modified flower pollination algorithm (MFPA), was utilized for sugarcane disease recognition. The selected wavelengths were incorporated into machine learning models, including Naïve Bayes, random forest, and support vector machine (SVM). The developed simplified SVM model, which utilized the MFPA wavelength selection method, yielded the best performance, achieving a precision value of 0.9753, a sensitivity value of 0.9259, a specificity value of 0.9524, and an accuracy of 0.9487. These results outperformed those obtained by other wavelength selection approaches, including the selectivity ratio, variable importance in projection, and the baseline method of the flower pollination algorithm.
    Matched MeSH terms: Algorithms
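
The study above selects informative wavelengths and feeds them to an SVM. The sketch below illustrates that wrapper-style idea on synthetic spectra, using a simple random-subset search scored by cross-validated SVM accuracy as a stand-in for the modified flower pollination algorithm; the data, subset size, and search budget are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "spectra": 120 samples x 200 wavelengths, 2 classes; only a few
# wavelengths (here 10, 50, 90) actually carry class information.
X = rng.normal(size=(120, 200))
y = rng.integers(0, 2, size=120)
X[:, [10, 50, 90]] += y[:, None] * 1.5

def subset_score(wl_idx):
    """Fitness of a wavelength subset = mean 5-fold CV accuracy of an SVM."""
    return cross_val_score(SVC(kernel="rbf"), X[:, wl_idx], y, cv=5).mean()

# Random-subset search stands in for the MFPA wavelength selection in the paper.
best_idx, best_score = None, -np.inf
for _ in range(50):
    idx = rng.choice(200, size=8, replace=False)
    s = subset_score(idx)
    if s > best_score:
        best_idx, best_score = idx, s
print(sorted(best_idx), round(best_score, 3))
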
  12. Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH
    Tomography, 2023 Dec 05;9(6):2158-2189.
    PMID: 38133073 DOI: 10.3390/tomography9060169
    Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and is subject to artifacts and noise, which compromises image quality and accuracy. In order to address these challenges, deep learning developments have the potential to improve the reconstruction of computed tomography images. In this regard, our research aim is to determine the techniques that are used for 3D deep learning in CT reconstruction and to identify the training and validation datasets that are accessible. This research was performed on five databases. After a careful assessment of each record based on the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and effective, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
    Matched MeSH terms: Algorithms
  13. Ling L, Huang L, Wang J, Zhang L, Wu Y, Jiang Y, et al.
    Interdiscip Sci, 2023 Dec;15(4):560-577.
    PMID: 37160860 DOI: 10.1007/s12539-023-00570-2
    Soft subspace clustering (SSC), which analyzes high-dimensional data and applies distinct weights to each cluster class to assess the membership degree of each cluster to the space, has shown promising results in recent years. By introducing spatial information, enhanced SSC algorithms improve the degree to which intraclass compactness and interclass separation are achieved. However, these algorithms are sensitive to noisy data and have a tendency to fall into local optima, and their segmentation accuracy suffers under the influence of noisy data. In this study, an SSC approach based on particle swarm optimization is suggested with the intention of reducing the interference caused by noisy data. First, the particle swarm optimization method is used to locate the best possible clustering centers. Second, incorporating the spatial membership degree makes it possible to use spatial information to quantify the relationship between different clusters more precisely. Finally, the extended noise clustering method is implemented in order to maximize the weight, and the constraint condition on the weight is changed from an equality constraint to a boundary constraint in order to reduce the impact of noise. The methodology presented in this research reduces the sensitivity of the SSC algorithm to noisy data. The efficacy of this algorithm is demonstrated using photos that already contain noise or by introducing noise into existing photographs. The revised SSC approach based on particle swarm optimization (PSO) is shown through a number of trials to have superior segmentation accuracy; as a result, this work gives a novel method for the segmentation of noisy images.
    Matched MeSH terms: Algorithms*
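
The abstract states that particle swarm optimization is used to locate the best clustering centers. Below is a minimal PSO sketch that searches for k cluster centers by minimizing the total within-cluster squared distance on toy 2-D data; it omits the paper's soft subspace weights, spatial membership, and noise clustering, and all PSO coefficients are generic textbook values.

import numpy as np

def pso_cluster_centers(X, k=3, n_particles=20, n_iter=60, seed=0):
    """Minimal PSO: each particle encodes k cluster centers (flattened); fitness is
    the sum of squared distances from each point to its nearest center."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    dim = k * d
    lo, hi = X.min(0).min(), X.max(0).max()
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)

    def fitness(p):
        centers = p.reshape(k, d)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        return (dists.min(axis=1) ** 2).sum()

    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # inertia + cognitive pull toward personal best + social pull toward global best
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest.reshape(k, d)

# Toy data: three Gaussian blobs in 2-D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ([0, 0], [3, 3], [0, 3])])
print(pso_cluster_centers(X))
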
  14. Vinothini R, Niranjana G, Yakub F
    J Digit Imaging, 2023 Dec;36(6):2480-2493.
    PMID: 37491543 DOI: 10.1007/s10278-023-00852-7
    The human respiratory system is affected when an individual is infected with COVID-19, which became a global pandemic in 2020 and affected millions of people worldwide. However, accurate diagnosis of COVID-19 can be challenging due to small variations in typical and COVID-19 pneumonia, as well as the complexities involved in classifying infection regions. Currently, various deep learning (DL)-based methods are being introduced for the automatic detection of COVID-19 using computerized tomography (CT) scan images. In this paper, we propose the pelican optimization algorithm-based long short-term memory (POA-LSTM) method for classifying coronavirus using CT scan images. The data preprocessing technique is used to convert raw image data into a suitable format for subsequent steps. Here, we develop a general framework called no new U-Net (nnU-Net) for region of interest (ROI) segmentation in medical images. We apply a set of heuristic guidelines derived from the domain to systematically optimize the ROI segmentation task, which represents the dataset's key properties. Furthermore, high-resolution net (HRNet) is a standard neural network design developed for feature extraction. HRNet chooses the top-down strategy over the bottom-up method after considering the two options. It first detects the subject, generates a bounding box around the object and then estimates the relevant feature. The POA is used to minimize the subjective influence of manually selected parameters and enhance the LSTM's parameters. Thus, the POA-LSTM is used for the classification process, achieving higher performance for each performance metric such as accuracy, sensitivity, F1-score, precision, and specificity of 99%, 98.67%, 98.88%, 98.72%, and 98.43%, respectively.
    Matched MeSH terms: Algorithms
  15. Fum WKS, Md Shah MN, Raja Aman RRA, Abd Kadir KA, Wen DW, Leong S, et al.
    Phys Eng Sci Med, 2023 Dec;46(4):1535-1552.
    PMID: 37695509 DOI: 10.1007/s13246-023-01317-5
    In fluoroscopy-guided interventions (FGIs), obtaining large quantities of labelled data for deep learning (DL) can be difficult. Synthetic labelled data can serve as an alternative, generated via pseudo 2D projections of CT volumetric data. However, contrasted vessels have low visibility in simple 2D projections of contrasted CT data. To overcome this, we propose an alternative method to generate fluoroscopy-like radiographs from contrasted head CT Angiography (CTA) volumetric data. The technique involves segmentation of brain tissue, bone, and contrasted vessels from CTA volumetric data, followed by an algorithm to adjust HU values, and finally, a standard ray-based projection is applied to generate the 2D image. The resulting synthetic images were compared to clinical fluoroscopy images for perceptual similarity and subject contrast measurements. Good perceptual similarity was demonstrated on vessel-enhanced synthetic images as compared to the clinical fluoroscopic images. Statistical tests of equivalence show that enhanced synthetic and clinical images have statistically equivalent mean subject contrast within 25% bounds. Furthermore, validation experiments confirmed that the proposed method for generating synthetic images improved the performance of DL models in certain regression tasks, such as localizing anatomical landmarks in clinical fluoroscopy images. Through enhanced pseudo 2D projection of CTA volume data, synthetic images with similar features to real clinical fluoroscopic images can be generated. The use of synthetic images as an alternative source for DL datasets represents a potential solution to the application of DL in FGIs procedures.
    Matched MeSH terms: Algorithms
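
A highly simplified sketch of the final step described above (a standard ray-based projection of an HU-adjusted CT volume into a 2-D fluoroscopy-like image) is shown below. It uses parallel rays, a crude HU-to-attenuation mapping, and an arbitrary synthetic "vessel" with a hypothetical HU boost; the paper's segmentation and HU-adjustment algorithm are not reproduced.

import numpy as np

def project_drr(volume_hu, axis=1, mu_water=0.02):
    """Very simplified digitally-reconstructed-radiograph style projection:
    convert HU to linear attenuation, integrate along one axis (parallel rays),
    and map to transmitted intensity with Beer-Lambert attenuation."""
    mu = mu_water * (1.0 + volume_hu / 1000.0)       # HU -> attenuation coefficient
    mu = np.clip(mu, 0.0, None)
    path_integral = mu.sum(axis=axis)                # line integral along the ray direction
    return np.exp(-path_integral)                    # intensity per detector pixel

# Toy 64^3 "head" volume: soft tissue ~40 HU with a bright tube as a contrasted vessel.
vol = np.full((64, 64, 64), 40.0)
vol[20:44, :, 30:34] = 400.0                         # hypothetical contrasted vessel
vol[20:44, :, 30:34] += 600.0                        # HU-boost step, as in vessel enhancement
image = project_drr(vol)
print(image.shape, image.min(), image.max())
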
  16. Lee CS, Abd Shukor SR
    Environ Sci Pollut Res Int, 2023 Dec;30(60):124790-124805.
    PMID: 36961637 DOI: 10.1007/s11356-023-26358-x
    The controllable intensified process has received immense attention from researchers in order to deliver the benefit of process intensification operated in a desired way, providing a more sustainable process with reduced environmental impact and improved intrinsic safety and process efficiency. Despite numerous studies on the gain and phase margin approach for conventional process systems, it has yet to be tested on intensified systems, as evidenced by the lack of available literature, to improve controller performance and robustness. Thus, this paper proposes the exact gain and phase margin (EGPM) analytical method to develop a suitable controller design for an intensified system using the Proportional-Integral-Derivative (PID) controller formulation, and compares it to the conventional Direct Synthesis (DS), Internal Model Control (IMC), and Industrial IMC methods in terms of performance and stability analysis. Simulation results showed that the EGPM method provides good setpoint tracking and disturbance rejection compared to DS, IMC, and Industrial IMC while retaining overall performance stability as time delay increases. The Bode stability criterion was used to determine the stability of the open-loop transfer function of each method, and the results demonstrated decreasing stability, and hence degrading control performance, as time delay increases for controllers designed using DS, IMC, and Industrial IMC. However, the proposed EGPM controller maintains overall robustness and control performance as time delay increases and outperforms the other controller design methods at higher time delays under a [Formula: see text] uncertainty test. Additionally, the proposed EGPM controller design method provides overall superior control performance, with lower overshoot and shorter rise time than the other controllers, when the process time constant is smaller in magnitude ([Formula: see text]) than that of the instrumentation element, which is one of the major concerns during the design of intensified controllers and results in an overall system of higher order. The desired gain margin and phase margin were suggested to be 2.5 to 4 and 60°-70°, respectively, for a wide range of control conditions for intensified processes, so that robust control can be achieved even with higher instrumentation dynamics. The proposed EGPM controller is considered a more reliable design strategy for maintaining the overall robustness and performance of higher-order and complex systems that are strongly affected by time delay and the fast dynamic response of intensified processes.
    Matched MeSH terms: Algorithms*
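
The exact gain and phase margin idea described above amounts to solving the gain-crossover and phase-crossover conditions simultaneously for the controller parameters. The sketch below sets up those four standard equations for a PI controller and a first-order-plus-dead-time plant and solves them numerically; the plant parameters, margin targets, and initial guess are illustrative, and the paper's analytical derivation is not reproduced.

import numpy as np
from scipy.optimize import fsolve

# Illustrative FOPDT process G(s) = K e^{-Ls} / (tau s + 1); parameters are not from the paper.
K, tau, L = 1.0, 2.0, 0.5
Am, phi_m = 3.0, np.deg2rad(65.0)          # target gain margin and phase margin

def mag_phase(w, kp, ti):
    """Magnitude and (unwrapped) phase of the open loop C(jw)G(jw) with a PI controller."""
    mag = kp * np.sqrt(1.0 + 1.0 / (w * ti) ** 2) * K / np.sqrt(1.0 + (w * tau) ** 2)
    phase = -np.arctan(1.0 / (w * ti)) - w * L - np.arctan(w * tau)
    return mag, phase

def egpm_equations(x):
    kp, ti, wp, wg = x
    mp, pp = mag_phase(wp, kp, ti)
    mg, pg = mag_phase(wg, kp, ti)
    return [
        pp + np.pi,                # phase crosses -180 deg at the phase-crossover wp
        Am * mp - 1.0,             # gain margin Am achieved at wp
        mg - 1.0,                  # loop gain is 1 at the gain-crossover wg
        pg + np.pi - phi_m,        # phase margin phi_m achieved at wg
    ]

kp, ti, wp, wg = fsolve(egpm_equations, x0=[1.0, 2.0, 2.0, 0.5])
print(f"Kp={kp:.3f}  Ti={ti:.3f}  wp={wp:.3f} rad/s  wg={wg:.3f} rad/s")
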
  17. Masood A, Hameed MM, Srivastava A, Pham QB, Ahmad K, Razali SFM, et al.
    Sci Rep, 2023 Nov 29;13(1):21057.
    PMID: 38030733 DOI: 10.1038/s41598-023-47492-z
    Fine particulate matter (PM2.5) is a significant air pollutant that drives many chronic health problems and premature mortality in large metropolises such as Delhi. In such a context, accurate prediction of PM2.5 concentration is critical for raising public awareness, allowing sensitive populations to plan ahead, and providing governments with information for public health alerts. This study applies a novel hybridization of the extreme learning machine (ELM) with a snake optimization algorithm, called the ELM-SO model, to forecast PM2.5 concentrations. The model has been developed on air quality inputs and meteorological parameters. Furthermore, the ELM-SO hybrid model is compared with individual machine learning models, such as Support Vector Regression (SVR), Random Forest (RF), Extreme Learning Machines (ELM), Gradient Boosting Regressor (GBR), XGBoost, and a deep learning model known as Long Short-Term Memory networks (LSTM), in forecasting PM2.5 concentrations. The study results suggested that ELM-SO exhibited the highest level of predictive performance among the compared models, with a testing squared correlation coefficient (R2) of 0.928 and a root mean square error of 30.325 µg/m3. The study's findings suggest that the ELM-SO technique is a valuable tool for accurately forecasting PM2.5 concentrations and could help advance the field of air quality forecasting. By developing state-of-the-art air pollution prediction models that incorporate ELM-SO, it may be possible to better understand and anticipate the effects of air pollution on human health and the environment.
    Matched MeSH terms: Algorithms
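
The hybrid model above is built around an extreme learning machine. The sketch below shows the core ELM step only (random hidden-layer weights, output weights solved in closed form via the pseudo-inverse) on synthetic regression data; the snake-optimization tuning and the real air-quality and meteorological inputs are not included.

import numpy as np

class ELMRegressor:
    """Basic extreme learning machine: random input weights, sigmoid hidden layer,
    output weights solved with the Moore-Penrose pseudo-inverse."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        self.beta = np.linalg.pinv(self._hidden(X)) @ y   # least-squares output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Synthetic stand-in for PM2.5 regression: 6 meteorological/air-quality features.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = X @ np.array([3.0, -2.0, 1.0, 0.5, 0.0, 4.0]) + rng.normal(0, 0.5, 500)
model = ELMRegressor(n_hidden=80).fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("test R^2:", 1 - np.var(y[400:] - pred) / np.var(y[400:]))
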
  18. Tieng FYF, Abdullah-Zawawi MR, Md Shahri NAA, Mohamed-Hussein ZA, Lee LH, Mutalib NA
    Brief Bioinform, 2023 Nov 22;25(1).
    PMID: 38040490 DOI: 10.1093/bib/bbad421
    RNA biology has risen to prominence after the remarkable discovery of diverse functions of noncoding RNA (ncRNA). Most untranslated transcripts exert their regulatory functions through RNA-RNA complexes formed via base pairing with complementary sequences in other RNAs. The interplay between RNAs is essential, as it has various functional roles in human cells, including genetic translation, RNA splicing, editing, ribosomal RNA maturation, RNA degradation and the regulation of metabolic pathways/riboswitches. Moreover, the pervasive transcription of the human genome allows for the discovery of novel genomic functions via RNA interactome investigation. The advancement of experimental procedures has resulted in an explosion of documented data, necessitating the development of efficient and precise computational tools and algorithms. This review provides an extensive update on RNA-RNA interaction (RRI) analysis via thermodynamic- and comparative-based RNA secondary structure prediction (RSP) and RNA-RNA interaction prediction (RIP) tools and their general functions. We also highlight the current knowledge of RRIs and the limitations of RNA interactome mapping via experimental data. Then, the gap between RSP and RIP, the importance of RNA homologues, the relationship between pseudoknots, and RNA folding thermodynamics are discussed. It is hoped that these emerging prediction tools will deepen the understanding of RNA-associated interactions in human diseases and hasten treatment processes.
    Matched MeSH terms: Algorithms
  19. Gan RK, Uddin H, Gan AZ, Yew YY, González PA
    Sci Rep, 2023 Nov 21;13(1):20350.
    PMID: 37989755 DOI: 10.1038/s41598-023-46986-0
    Since its initial launch, ChatGPT has gained significant attention from the media, with many claiming that its arrival is a transformative milestone in the advancement of the AI revolution. Our aim was to assess the performance of ChatGPT before and after teaching it mass casualty incident triage, by utilizing a validated questionnaire specifically designed for such scenarios. In addition, we compared the triage performance between ChatGPT and medical students. Our cross-sectional study employed a mixed-methods analysis to assess the performance of ChatGPT in mass casualty incident triage, pre- and post-teaching of Simple Triage And Rapid Treatment (START) triage. After teaching the START triage algorithm, ChatGPT scored an overall triage accuracy of 80%, with only 20% of cases being over-triaged. The mean accuracy of medical students on the same questionnaire was 64.3%. Qualitative analysis of ChatGPT responses on the pre-determined themes of 'walking-wounded', 'respiration', 'perfusion', and 'mental status' showed similar performance pre- and post-teaching of START triage. Additional themes of 'disclaimer', 'prediction', 'management plan', and 'assumption' were identified during the thematic analysis. ChatGPT exhibited promising results in effectively responding to mass casualty incident questionnaires. Nevertheless, additional research is necessary to ensure its safety and efficacy before clinical implementation.
    Matched MeSH terms: Algorithms
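
The study above teaches ChatGPT the START triage algorithm. For reference, the widely published START decision rules can be written as a short function, as sketched below; this encodes the standard algorithm, not any code from the study, and the example casualty is hypothetical.

def start_triage(walking, breathing, breathing_after_airway_opened,
                 resp_rate, radial_pulse, cap_refill_sec, obeys_commands):
    """Simple Triage And Rapid Treatment (START) categories:
    GREEN (minor), YELLOW (delayed), RED (immediate), BLACK (expectant)."""
    if walking:
        return "GREEN"
    if not breathing:
        return "RED" if breathing_after_airway_opened else "BLACK"
    if resp_rate > 30:
        return "RED"
    if (not radial_pulse) or cap_refill_sec > 2:
        return "RED"
    if not obeys_commands:
        return "RED"
    return "YELLOW"

# Example: non-ambulatory casualty, breathing at 24/min, radial pulse present,
# capillary refill 1.5 s, obeys commands -> delayed (YELLOW).
print(start_triage(walking=False, breathing=True, breathing_after_airway_opened=True,
                   resp_rate=24, radial_pulse=True, cap_refill_sec=1.5,
                   obeys_commands=True))
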
  20. Ong SQ, Isawasan P, Ngesom AMM, Shahar H, Lasim AM, Nair G
    Sci Rep, 2023 Nov 05;13(1):19129.
    PMID: 37926755 DOI: 10.1038/s41598-023-46342-2
    Machine learning (ML) algorithms are receiving a lot of attention in the development of predictive models for monitoring dengue transmission rates. Previous work has focused only on specific weather variables and algorithms, and there is still a need for a model that uses more variables and algorithms that have higher performance. In this study, we use vector indices and meteorological data as predictors to develop the ML models. We trained and validated seven ML algorithms, including an ensemble ML method, and compared their performance using the receiver operating characteristic (ROC) with the area under the curve (AUC), accuracy and F1 score. Our results show that ensemble ML methods such as XGBoost, AdaBoost and Random Forest perform better than logistic regression, Naïve Bayes, decision tree, and support vector machine (SVM), with XGBoost having the highest AUC, accuracy and F1 score. Analysis of the importance of the variables showed that the container index was the least important. By removing this variable, the ML models improved their performance by at least 6% in AUC and F1 score. Our results provide a framework for future studies on the use of predictive models in the development of an early warning system.
    Matched MeSH terms: Algorithms
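
The comparison described above (several classifiers scored by ROC AUC, accuracy, and F1) follows a standard pattern, sketched below with scikit-learn on synthetic data. The feature set is a stand-in for the study's vector indices and meteorological variables, and scikit-learn's gradient boosting is used instead of the separate xgboost package; none of this reproduces the paper's data or tuning.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Synthetic stand-in for weekly vector-index + meteorological predictors of dengue risk.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(f"{name:18s} AUC={roc_auc_score(y_te, prob):.3f} "
          f"ACC={accuracy_score(y_te, pred):.3f} F1={f1_score(y_te, pred):.3f}")
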