Displaying publications 1 - 20 of 1458 in total

  1. Han F, Hessen AS, Amari A, Elboughdiri N, Zahmatkesh S
    Environ Res, 2024 Mar 15;245:117972.
    PMID: 38141913 DOI: 10.1016/j.envres.2023.117972
    Metal-organic framework (MOF)-based composites have received significant attention in a variety of applications, including pollutant adsorption processes. The current investigation was designed to model, forecast, and optimize heavy metal (Cu2+) removal from wastewater using a MOF nanocomposite. This work was modeled with response surface methodology (RSM) and artificial neural network (ANN) algorithms. In addition, optimization of the studied factors was performed through the RSM method to find the optimal conditions. The findings show that RSM and ANN can accurately forecast the Cu2+ removal efficiency (RE) of the adsorption process. The maximum values of RE are achieved at the highest time (150 min), the highest adsorbent dosage (0.008 g), and the highest pH (6). The R2 values obtained were 0.9995, 0.9992, and 0.9996 for ANN modeling of adsorption capacity based on different adsorbent dosages, Cu2+ solution pHs, and different ion concentrations, respectively. The ANN demonstrated a high level of accuracy in predicting the local minima of the graph. In addition, the RSM optimization results showed that the optimum RE occurred at an adsorbent dosage of 0.007 g and a time of 144.229 min.
    Matched MeSH terms: Algorithms
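The RSM-style optimisation described in entry 1 can be sketched as follows. The quadratic response surface and its coefficients here are hypothetical, not fitted to the paper's data; only the procedure, evaluating a second-order model over the design space and taking the maximum, mirrors the described approach.

```python
# Minimal sketch of RSM-style optimisation over dosage and contact time.
# The response surface below is hypothetical, not the paper's fitted model.

def removal_efficiency(dosage_g, time_min):
    """Hypothetical second-order (RSM-style) response surface for Cu2+ RE (%)."""
    d = dosage_g / 0.008      # normalise dosage to [0, 1]
    t = time_min / 150.0      # normalise contact time to [0, 1]
    return 100.0 * (0.5 * d + 0.4 * t - 0.2 * d * d - 0.15 * t * t + 0.1 * d * t)

def grid_optimise():
    """Exhaustively search the design space for the maximum RE."""
    best = (-1.0, 0.0, 0.0)                   # (RE, dosage, time)
    for i in range(81):                       # dosage: 0.0000 .. 0.0080 g
        for j in range(151):                  # time:   0 .. 150 min
            d, t = i * 0.0001, float(j)
            re = removal_efficiency(d, t)
            if re > best[0]:
                best = (re, d, t)
    return best
```

For this illustrative surface both partial derivatives stay positive over the design space, so the optimum lands at the upper corner (highest dosage and time), consistent with the trend the abstract reports.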
  2. Cheng J, Wang H, Wei S, Mei J, Liu F, Zhang G
    Comput Biol Med, 2024 Mar;170:108000.
    PMID: 38232453 DOI: 10.1016/j.compbiomed.2024.108000
    Alzheimer's disease (AD) is a neurodegenerative disease characterized by various pathological changes. Utilizing multimodal data from Fluorodeoxyglucose positron emission tomography (FDG-PET) and Magnetic Resonance Imaging (MRI) of the brain can offer comprehensive information about the lesions from different perspectives and improve the accuracy of prediction. However, there are significant differences in the feature space of multimodal data. Commonly, the simple concatenation of multimodal features can cause the model to struggle in distinguishing and utilizing the complementary information between different modalities, thus affecting the accuracy of predictions. Therefore, we propose an AD prediction model based on de-correlation constraint and multi-modal feature interaction. This model consists of the following three parts: (1) The feature extractor employs residual connections and attention mechanisms to capture distinctive lesion features from FDG-PET and MRI data within their respective modalities. (2) The de-correlation constraint function enhances the model's capacity to extract complementary information from different modalities by reducing the feature similarity between them. (3) The mutual attention feature fusion module interacts with the features within and between modalities to enhance the modal-specific features and adaptively adjust the weights of these features based on information from other modalities. The experimental results on the ADNI database demonstrate that the proposed model achieves a prediction accuracy of 86.79% for AD, MCI and NC, which is higher than the existing multi-modal AD prediction models.
    Matched MeSH terms: Algorithms
  3. Zhang K, Ting HN, Choo YM
    Comput Methods Programs Biomed, 2024 Mar;245:108043.
    PMID: 38306944 DOI: 10.1016/j.cmpb.2024.108043
    BACKGROUND AND OBJECTIVE: Conflict may happen when more than one classifier is used to perform prediction or classification. The recognition model error leads to conflicting evidence. These conflicts can cause decision errors in baby cry recognition and further decrease its recognition accuracy. Thus, the objective of this study is to propose a method that can effectively minimize the conflict among deep learning models and improve the accuracy of baby cry recognition.

    METHODS: An improved Dempster-Shafer evidence theory (DST) based on Wasserstein distance and Deng entropy was proposed to reduce the conflicts among the results by combining the credibility degree between evidence and the uncertainty degree of evidence. To validate the effectiveness of the proposed method, examples were analyzed, and the method was applied to baby cry recognition. The Whale optimization algorithm-Variational mode decomposition (WOA-VMD) was used to optimally decompose the baby cry signals. The deep features of decomposed components were extracted using the VGG16 model. Long Short-Term Memory (LSTM) models were used to classify baby cry signals. An improved DST decision method was used to obtain the decision fusion.

    RESULTS: The proposed fusion method achieves an accuracy of 90.15% in classifying three types of baby cry. Improvement between 2.90% and 4.98% was obtained over the existing DST fusion methods. Recognition accuracy was improved by between 5.79% and 11.53% when compared to the latest methods used in baby cry recognition.

    CONCLUSION: The proposed method optimally decomposes baby cry signal, effectively reduces the conflict among the results of deep learning models and improves the accuracy of baby cry recognition.

    Matched MeSH terms: Algorithms*
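Entry 3's improved DST builds on the classical Dempster rule of combination. The sketch below shows only that underlying rule; the paper's Wasserstein-distance and Deng-entropy weighting is not reproduced, and the cry-type hypotheses and masses are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule of combination.

    m1, m2 map frozenset hypotheses to masses summing to 1. Returns the
    fused assignment, renormalised by the conflict mass K."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in fused.items()}

# Two classifiers (evidence sources) over invented cry types "hunger"/"pain".
HUNGER, PAIN = frozenset({"hunger"}), frozenset({"pain"})
EITHER = HUNGER | PAIN
fused = dempster_combine({HUNGER: 0.6, PAIN: 0.1, EITHER: 0.3},
                         {HUNGER: 0.5, PAIN: 0.2, EITHER: 0.3})
```

Because both sources lean toward "hunger", the fused assignment concentrates mass there, which is exactly the agreement-reinforcing behaviour the improved DST method moderates when sources conflict.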
  4. Peng P, Wu D, Huang LJ, Wang J, Zhang L, Wu Y, et al.
    Interdiscip Sci, 2024 Mar;16(1):39-57.
    PMID: 37486420 DOI: 10.1007/s12539-023-00580-0
    Breast cancer is commonly diagnosed with mammography. Using image segmentation algorithms to separate lesion areas in mammography can facilitate diagnosis by doctors and reduce their workload, which has important clinical significance. Because large, accurately labeled medical image datasets are difficult to obtain, traditional clustering algorithms are widely used in medical image segmentation as an unsupervised model. Traditional unsupervised clustering algorithms have limited learning knowledge. Moreover, some semi-supervised fuzzy clustering algorithms cannot fully mine the information of labeled samples, which results in insufficient supervision. When faced with complex mammography images, the above algorithms cannot accurately segment lesion areas. To address this, a semi-supervised fuzzy clustering based on knowledge weighting and cluster center learning (WSFCM_V) is presented. According to prior knowledge, three learning modes are proposed: a knowledge weighting method for cluster centers, Euclidean distance weights for unlabeled samples, and learning from the cluster centers of labeled sample sets. These strategies improve the clustering performance. On real breast molybdenum target images, the WSFCM_V algorithm is compared with currently popular semi-supervised and unsupervised clustering algorithms. WSFCM_V has the best evaluation index values. Experimental results demonstrate that WSFCM_V achieves higher segmentation accuracy than existing clustering algorithms, both for larger lesion regions such as tumor areas and for smaller lesion areas such as calcification points.
    Matched MeSH terms: Algorithms
  5. Tan SL, Selvachandran G, Ding W, Paramesran R, Kotecha K
    Interdiscip Sci, 2024 Mar;16(1):16-38.
    PMID: 37962777 DOI: 10.1007/s12539-023-00589-5
    As one of the most common female cancers, cervical cancer often develops years after a prolonged and reversible pre-cancerous stage. Traditional classification algorithms used for detection of cervical cancer often require cell segmentation and feature extraction techniques, while convolutional neural network (CNN) models demand a large dataset to mitigate over-fitting and poor generalization problems. To this end, this study aims to develop deep learning models for automated cervical cancer detection that do not rely on segmentation methods or custom features. Due to limited data availability, transfer learning was employed with pre-trained CNN models to directly operate on Pap smear images for a seven-class classification task. Thorough evaluation and comparison of 13 pre-trained deep CNN models were performed using the publicly available Herlev dataset and the Keras package in Google Colaboratory. In terms of accuracy and performance, DenseNet-201 is the best-performing model. The pre-trained CNN models studied in this paper produced good experimental results and required little computing time.
    Matched MeSH terms: Algorithms
  6. Premkumar R, Srinivasan A, Harini Devi KG, M D, E G, Jadhav P, et al.
    Biosystems, 2024 Mar;237:105142.
    PMID: 38340976 DOI: 10.1016/j.biosystems.2024.105142
    Single-cell analysis (SCA) improves the detection of cancer, the immune system, and chronic diseases from complicated biological processes. SCA techniques generate high-dimensional, innovative, and complex data, making traditional analysis difficult and impractical. In the different cell types, conventional cell sequencing methods have signal transformation and disease detection limitations. To overcome these challenges, various deep learning (DL) techniques have outperformed standard state-of-the-art computer algorithms in SCA techniques. This review discusses DL applications in SCA and presents a detailed study on improving SCA data processing and analysis. Firstly, we introduce fundamental concepts and critical points of cell analysis techniques, which illustrate the application of SCA. Secondly, various effective DL strategies are applied to SCA to analyze data and provide significant results from complex data sources. Finally, we explore DL as a future direction in SCA and highlight new challenges and opportunities for the rapidly evolving field of single-cell omics.
    Matched MeSH terms: Algorithms
  7. Teoh YX, Alwan JK, Shah DS, Teh YW, Goh SL
    Clin Biomech (Bristol, Avon), 2024 Mar;113:106188.
    PMID: 38350282 DOI: 10.1016/j.clinbiomech.2024.106188
    BACKGROUND: Despite the existence of evidence-based rehabilitation strategies that address biomechanical deficits, the persistence of recurrent ankle problems in 70% of patients with acute ankle sprains highlights the unresolved nature of this issue. Artificial intelligence (AI) emerges as a promising tool to identify definitive predictors for ankle sprains. This paper aims to summarize the use of AI in investigating the ankle biomechanics of healthy subjects and those with ankle sprains.

    METHODS: Articles published between 2010 and 2023 were searched from five electronic databases. Fifty-nine papers were included for analysis with regard to: i) types of motion tested (functional vs. purposeful ankle movement); ii) types of biomechanical parameters measured (kinetic vs. kinematic); iii) types of sensor systems used (lab-based vs. field-based); and iv) AI techniques used.

    FINDINGS: Most studies (83.1%) examined biomechanics during functional motion. A single kinematic parameter, specifically ankle range of motion, could achieve accuracy of up to 100% in identifying injury status. Wearable sensors exhibited high reliability for use in both laboratory and on-field/clinical settings. AI algorithms primarily utilized electromyography and joint angle information as input data. The support vector machine was the most used supervised learning algorithm (18.64%), while the artificial neural network demonstrated the highest accuracy in eight studies.

    INTERPRETATION: The potential for remote patient monitoring is evident with the adoption of field-based devices. Nevertheless, AI-based sensors are underutilized in detecting ankle motions at risk of sprain. We identify three key challenges: sensor design, the controllability of AI models, and the integration of AI-sensor models, providing valuable insights for future research.

    Matched MeSH terms: Algorithms
  8. Ngugi HN, Ezugwu AE, Akinyelu AA, Abualigah L
    Environ Monit Assess, 2024 Feb 24;196(3):302.
    PMID: 38401024 DOI: 10.1007/s10661-024-12454-z
    Digital image processing has witnessed a significant transformation, owing to the adoption of deep learning (DL) algorithms, which have proven to be vastly superior to conventional methods for crop detection. These DL algorithms have recently found successful applications across various domains, translating input data, such as images of afflicted plants, into valuable insights, like the identification of specific crop diseases. This innovation has spurred the development of cutting-edge techniques for early detection and diagnosis of crop diseases, leveraging tools such as convolutional neural networks (CNN), K-nearest neighbour (KNN), support vector machines (SVM), and artificial neural networks (ANN). This paper offers an all-encompassing exploration of the contemporary literature on methods for diagnosing, categorizing, and gauging the severity of crop diseases. The review examines the performance analysis of the latest machine learning (ML) and DL techniques outlined in these studies. It also scrutinizes the methodologies and datasets and outlines the prevalent recommendations and identified gaps within different research investigations. In conclusion, the review offers insights into potential solutions and outlines the direction for future research in this field. The review underscores that while most studies have concentrated on traditional ML algorithms and CNN, there has been a noticeable dearth of focus on emerging DL algorithms like capsule neural networks and vision transformers. Furthermore, it sheds light on the fact that several datasets employed for training and evaluating DL models have been tailored to suit specific crop types, emphasizing the pressing need for a comprehensive and expansive image dataset encompassing a wider array of crop varieties. Moreover, the survey draws attention to the prevailing trend where the majority of research endeavours have concentrated on individual plant diseases, ML, or DL algorithms. In light of this, it advocates for the development of a unified framework that harnesses an ensemble of ML and DL algorithms to address the complexities of multiple plant diseases effectively.
    Matched MeSH terms: Algorithms
  9. Singh RB, Patra KC, Pradhan B, Samantra A
    J Environ Manage, 2024 Feb 14;352:120091.
    PMID: 38228048 DOI: 10.1016/j.jenvman.2024.120091
    Water is a vital resource supporting a broad spectrum of ecosystems and human activities. The quality of river water has declined in recent years due to the discharge of hazardous materials and toxins. Deep learning and machine learning have gained significant attention for analysing time-series data. However, these methods often suffer from high complexity and significant forecasting errors, primarily due to non-linear datasets and hyperparameter settings. To address these challenges, we have developed an innovative HDTO-DeepAR approach for predicting water quality indicators. This proposed approach is compared with standalone algorithms, including DeepAR, BiLSTM, GRU and XGBoost, using performance metrics such as MAE, MSE, MAPE, and NSE. The NSE of the hybrid approach ranges from 0.8 to 0.96; values this close to 1 indicate an efficient model. The PICP values (ranging from 95% to 98%) indicate that the model is highly reliable in forecasting water quality indicators. Experimental results reveal a close resemblance between the model's predictions and actual values, providing valuable insights for predicting future trends. The comparative study shows that the suggested model surpasses all existing, well-known models.
    Matched MeSH terms: Algorithms
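Two of the performance metrics named in entry 9, NSE and MAE, have simple closed forms. A minimal sketch in plain Python (not the paper's implementation):

```python
def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means no better than
    predicting the observed mean; negative values are worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    resid = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    spread = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - resid / spread

def mae(observed, predicted):
    """Mean absolute error of a forecast against observations."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)
```

The 0.8-0.96 NSE range quoted in the abstract therefore means the hybrid model's residual variance is only 4-20% of the variance of the observations themselves.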
  10. Mahmud SMH, Goh KOM, Hosen MF, Nandi D, Shoombuatong W
    Sci Rep, 2024 Feb 05;14(1):2961.
    PMID: 38316843 DOI: 10.1038/s41598-024-52653-9
    DNA-binding proteins (DBPs) play a significant role in all phases of genetic processes, including DNA recombination, repair, and modification. They are often utilized in drug discovery as fundamental elements of steroids, antibiotics, and anticancer drugs. Predicting them poses the most challenging task in proteomics research. Conventional experimental methods for DBP identification are costly and sometimes biased toward prediction. Therefore, developing powerful computational methods that can accurately and rapidly identify DBPs from sequence information is an urgent need. In this study, we propose a novel deep learning-based method called Deep-WET to accurately identify DBPs from primary sequence information. In Deep-WET, we employed three powerful feature encoding schemes, namely Global Vectors, Word2Vec, and fastText, to encode the protein sequence. Subsequently, these three features were sequentially combined and weighted using the weights obtained from the elements learned through the differential evolution (DE) algorithm. To enhance the predictive performance of Deep-WET, we applied the SHapley Additive exPlanations approach to remove irrelevant features. Finally, the optimal feature subset was input into convolutional neural networks to construct the Deep-WET predictor. Both cross-validation and independent tests indicated that Deep-WET achieved superior predictive performance compared to conventional machine learning classifiers. In addition, in the extensive independent test, Deep-WET was effective and outperformed several state-of-the-art methods for DBP prediction, with an accuracy of 78.08%, an MCC of 0.559, and an AUC of 0.805. This superior performance shows that Deep-WET has a tremendous capacity to predict DBPs. The web server of Deep-WET and the curated datasets in this study are available at https://deepwet-dna.monarcatechnical.com/. The proposed Deep-WET is anticipated to serve the community-wide effort for large-scale identification of potential DBPs.
    Matched MeSH terms: Algorithms
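Entry 10 weights its three sequence embeddings with differential evolution. A minimal DE/rand/1/bin sketch follows; the toy objective (recovering hypothetical target weights) merely stands in for Deep-WET's accuracy-driven fitness, which the abstract does not specify.

```python
import random

def differential_evolution(objective, dim, bounds=(0.0, 1.0), pop_size=20,
                           f=0.8, cr=0.9, generations=200, seed=42):
    """Minimal DE/rand/1/bin minimiser (pure-Python sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: three distinct donors, none equal to the current vector.
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = [
                min(hi, max(lo, a[k] + f * (b[k] - c[k])))
                if (k == j_rand or rng.random() < cr) else pop[i][k]
                for k in range(dim)
            ]
            score = objective(trial)
            if score <= scores[i]:              # greedy selection
                pop[i], scores[i] = trial, score
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Toy objective: recover hypothetical channel weights (0.5, 0.3, 0.2) for the
# three embeddings (GloVe, Word2Vec, fastText).
target = (0.5, 0.3, 0.2)
weights, score = differential_evolution(
    lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target)), dim=3)
```

In the paper's setting the objective would instead be the classifier's validation performance on the weighted, concatenated features.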
  11. Yu K, Feng L, Chen Y, Wu M, Zhang Y, Zhu P, et al.
    Comput Biol Med, 2024 Feb;169:107835.
    PMID: 38096762 DOI: 10.1016/j.compbiomed.2023.107835
    Current wavelet thresholding methods for cardiogram signals captured by flexible wearable sensors face a challenge in achieving both accurate thresholding and real-time signal denoising. This paper proposes a real-time, accurate thresholding method based on signal estimation, specifically the normalized ACF, as an alternative to traditional noise estimation, without the need for parameter fine-tuning or extensive data training. This method is experimentally validated using a variety of electrocardiogram (ECG) signals from different databases, each containing specific types of noise such as additive white Gaussian (AWG) noise, baseline wander noise, electrode motion noise, and muscle artifact noise. Although this method only slightly outperforms other methods in removing AWG noise from ECG signals, it far outperforms conventional methods in removing other real noise. This is attributed to the method's ability to accurately distinguish not only AWG noise, whose spectrum differs significantly from that of the ECG signal, but also real noise with similar spectra. In contrast, the conventional methods are effective only for AWG noise. In addition, this method improves the denoising visualization of the measured ECG signals and can be used to optimize the parameters of other wavelet methods to enhance the denoised periodic signals, thereby improving diagnostic accuracy.
    Matched MeSH terms: Algorithms
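The normalized ACF that entry 11 uses for signal estimation can be sketched as follows; the biased estimator and the example signal are illustrative choices, not taken from the paper.

```python
import math

def normalized_acf(x, max_lag):
    """Normalised autocorrelation of a signal.

    r[0] is always 1; a periodic component keeps |r| high at multiples of its
    period, while broadband noise makes r decay quickly with lag."""
    mean = sum(x) / len(x)
    centred = [v - mean for v in x]
    energy = sum(v * v for v in centred)
    return [
        sum(centred[i] * centred[i + k] for i in range(len(x) - k)) / energy
        for k in range(max_lag + 1)
    ]

# Example: a clean periodic signal (period 20) keeps a high ACF one period out.
signal = [math.sin(2 * math.pi * i / 20) for i in range(200)]
r = normalized_acf(signal, 25)
```

For a quasi-periodic ECG, the slow ACF decay of the signal versus the fast decay of noise is what makes a signal-estimation-based threshold possible.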
  12. Xu M, Abdullah NA, Md Sabri AQ
    Comput Biol Chem, 2024 Feb;108:107997.
    PMID: 38154318 DOI: 10.1016/j.compbiolchem.2023.107997
    This work focuses on data sampling in cancer-gene association prediction. Currently, researchers are using machine learning methods to predict genes that are more likely to produce cancer-causing mutations. To improve the performance of machine learning models, methods have been proposed, one of which is to improve the quality of the training data. Existing methods focus mainly on positive data, i.e., cancer driver genes, for screening selection. This paper proposes a low-cancer-related gene screening method based on gene networks and graph theory algorithms to improve negative sample selection. Genetic data with low cancer correlation are used as negative training samples. After experimental verification, using the negative samples screened by this method to train the cancer gene classification model can improve prediction performance. The biggest advantage of this method is that it can be easily combined with other methods that focus on enhancing the quality of positive training samples. It has been demonstrated that significant improvement is achieved by combining this method with three state-of-the-art cancer gene prediction methods.
    Matched MeSH terms: Algorithms
  13. Asteris PG, Gandomi AH, Armaghani DJ, Tsoukalas MZ, Gavriilaki E, Gerber G, et al.
    J Cell Mol Med, 2024 Feb;28(4):e18105.
    PMID: 38339761 DOI: 10.1111/jcmm.18105
    Complement inhibition has shown promise in various disorders, including COVID-19. A prediction tool including complement genetic variants is vital. This study aims to identify crucial complement-related variants and determine an optimal pattern for accurate disease outcome prediction. Genetic data from 204 COVID-19 patients hospitalized between April 2020 and April 2021 at three referral centres were analysed using an artificial intelligence-based algorithm to predict disease outcome (ICU vs. non-ICU admission). A recently introduced alpha-index identified the 30 most predictive genetic variants. The DERGA algorithm, which employs multiple classification algorithms, determined the optimal pattern of these key variants, resulting in 97% accuracy for predicting disease outcome. Individual variations ranged from 40 to 161 variants per patient, with 977 total variants detected. This study demonstrates the utility of the alpha-index in ranking a substantial number of genetic variants. This approach enables the implementation of well-established classification algorithms that effectively determine the relevance of genetic variants in predicting outcomes with high accuracy.
    Matched MeSH terms: Algorithms
  14. Tanimu B, Hamed MM, Bello AD, Abdullahi SA, Ajibike MA, Shahid S
    Environ Sci Pollut Res Int, 2024 Feb;31(10):15986-16010.
    PMID: 38308777 DOI: 10.1007/s11356-024-32128-0
    Choosing a suitable gridded climate dataset is a significant challenge in hydro-climatic research, particularly in areas lacking long-term, reliable, and dense records. This study used the most common method, the Perkins skill score (PSS), with two advanced time series similarity algorithms, short time series distance (STS) and cross-correlation distance (CCD), for the first time to evaluate, compare, and rank five gridded climate datasets, namely the Climate Research Unit (CRU), TERRA Climate (TERRA), Climate Prediction Center (CPC), European Reanalysis V.5 (ERA5), and Climatologies at high resolution for Earth's land surface areas (CHELSA), according to their ability to replicate the in situ rainfall and temperature data in Nigeria. The performance of the methods was evaluated by comparing the rankings obtained using compromise programming (CP) based on four statistical criteria in replicating in situ rainfall, maximum temperature, and minimum temperature at 26 locations distributed over Nigeria. Both methods identified CRU as the best gridded climate dataset for Nigeria, followed by CHELSA, TERRA, ERA5, and CPC. The integrated STS values using the group decision-making method for CRU rainfall, maximum temperature, and minimum temperature were 17, 10.1, and 20.8, respectively, while the CCD values for those variables were 17.7, 11, and 12.2, respectively. The CP based on conventional statistical metrics supported the results obtained using STS and CCD. CRU's Pbias was between 0.5 and 1; KGE ranged from 0.5 to 0.9; NSE ranged from 0.3 to 0.8; and NRMSE was between -30 and 68.2, which were much better than the other products. The findings establish STS and CCD's ability to evaluate the performance of climate data while avoiding complex and time-consuming multi-criteria decision algorithms based on multiple statistical metrics.
    Matched MeSH terms: Algorithms*
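The Perkins skill score used in entry 14 measures the overlap between the empirical PDFs of observed and modelled data. A minimal sketch, assuming the data are pre-scaled to a common range and using an illustrative bin count:

```python
def perkins_skill_score(obs, model, bins=10, lo=0.0, hi=1.0):
    """Perkins skill score: overlap of the empirical PDFs of two samples.

    1.0 means identical histograms; 0.0 means the histograms share no bins.
    Assumes values fall in [lo, hi]; out-of-range values go to the edge bins."""
    width = (hi - lo) / bins
    def pdf(data):
        counts = [0] * bins
        for v in data:
            k = min(bins - 1, max(0, int((v - lo) / width)))
            counts[k] += 1
        return [c / len(data) for c in counts]
    return sum(min(p, q) for p, q in zip(pdf(obs), pdf(model)))
```

Because PSS compares whole distributions rather than paired values, it rewards a dataset that reproduces the climate's variability even when individual time steps disagree, which is why the study pairs it with time-series distances such as STS and CCD.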
  15. Azzani M, Azhar ZI, Ruzlin ANM, Wee CX, Samsudin EZ, Al-Harazi SM, et al.
    BMC Cancer, 2024 Jan 05;24(1):40.
    PMID: 38182993 DOI: 10.1186/s12885-023-11814-1
    BACKGROUND: Colorectal cancer (CRC) is the third most common cancer type worldwide. Colorectal cancer treatment costs vary between countries as it depends on policy factors such as treatment algorithms, availability of treatments and whether the treatment is government-funded. Hence, the objective of this systematic review is to determine the prevalence and measurements of financial toxicity (FT), including the cost of treatment, among colorectal cancer patients.

    METHODS: Medline via PubMed platform, Science Direct, Scopus, and CINAHL databases were searched to find studies that examined CRC FT. There was no limit on the design or setting of the study.

    RESULTS: Out of 819 papers identified through an online search, only 15 papers were included in this review. The majority (n = 12, 80%) were from high-income countries, and none from low-income countries. Few studies (n = 2) reported objective FT, denoted by the prevalence of catastrophic health expenditure (CHE); 60% (9 out of 15) reported the prevalence of subjective FT, which ranged from 7% to 80%; and 40% (6 out of 15) of the included studies reported the cost of CRC management, with annual direct medical costs ranging from USD 2045 to 10,772 and indirect medical costs from USD 551 to 795.

    CONCLUSIONS: The lack of consensus in defining and quantifying financial toxicity hindered the comparability of results needed to yield a mean cost of managing CRC. Beyond that, information from low-income countries is missing, limiting global representativeness.

    Matched MeSH terms: Algorithms
  16. Sachithanandan A, Lockman H, Azman RR, Tho LM, Ban EZ, Ramon V
    Med J Malaysia, 2024 Jan;79(1):9-14.
    PMID: 38287751
    INTRODUCTION: The poor prognosis of lung cancer has been largely attributed to the fact that most patients present with advanced stage disease. Although low dose computed tomography (LDCT) is presently considered the optimal imaging modality for lung cancer screening, its use has been hampered by cost and accessibility. One possible approach to facilitate lung cancer screening is to implement a risk-stratification step with chest radiography, given its ease of access and affordability. Furthermore, implementation of artificial-intelligence (AI) in chest radiography is expected to improve the detection of indeterminate pulmonary nodules, which may represent early lung cancer.

    MATERIALS AND METHODS: This consensus statement was formulated by a panel of five experts of primary care and specialist doctors. A lung cancer screening algorithm was proposed for implementation locally.

    RESULTS: In an earlier pilot project collaboration, AI-assisted chest radiography had been incorporated into lung cancer screening in the community. Preliminary experience in the pilot project suggests that the system is easy to use, affordable and scalable. Drawing from experience with the pilot project, a standardised lung cancer screening algorithm using AI in Malaysia was proposed. Requirements for such a screening programme, expected outcomes and limitations of AI-assisted chest radiography were also discussed.

    CONCLUSION: The combined strategy of AI-assisted chest radiography and complementary LDCT imaging has great potential in detecting early-stage lung cancer in a timely manner, and irrespective of risk status. The proposed screening algorithm provides a guide for clinicians in Malaysia to participate in screening efforts.

    Matched MeSH terms: Algorithms
  17. Tukkee AS, Bin Abdul Wahab NI, Binti Mailah NF, Bin Hassan MK
    PLoS One, 2024;19(2):e0298094.
    PMID: 38330067 DOI: 10.1371/journal.pone.0298094
    Recently, global interest in organizing the functioning of renewable energy resources (RES) through microgrids (MG) has developed, as a unique approach to tackle technical, economic, and environmental difficulties. This study proposes implementing a developed Distributable Resource Management strategy (DRMS) in hybrid microgrid systems to reduce total net present cost (TNPC), energy loss (Ploss), and gas emissions (GEM), while taking the cost-benefit index (CBI) and loss of power supply probability (LPSP) as operational constraints. The Grey Wolf Optimizer (GWO) was utilized to find the optimal size of the hybrid microgrid components and calculate the multi-objective function with and without the proposed management method. In addition, a detailed sensitivity analysis of numerous economic and technological parameters was performed to assess system performance. The proposed strategy reduced the system's total net present cost, power loss, and emissions by 1.06%, 8.69%, and 17.19%, respectively, compared to normal operation. Firefly Algorithm (FA) and Particle Swarm Optimization (PSO) techniques were used to verify the results. This study provides a detailed plan for evaluating the effectiveness of hybrid microgrid systems from technical, economic, and environmental perspectives.
    Matched MeSH terms: Algorithms*
  18. Husnain AU, Mokhtar N, Mohamed Shah NB, Dahari MB, Azmi AA, Iwahashi M
    PLoS One, 2024;19(2):e0296969.
    PMID: 38394180 DOI: 10.1371/journal.pone.0296969
    There are three primary objectives of this work: first, to establish a gas concentration map; second, to estimate the point of emission of the gas; and third, to generate a path from any location to the point of emission for UAVs or UGVs. A mountable array of MOX sensors was developed so that the angles and distances among the sensors, alongside sensor data, could be utilized to identify the influx of gas plumes. Gas dispersion experiments under indoor conditions were conducted to train machine learning algorithms by collecting data at numerous locations and angles. Taguchi's orthogonal arrays for experiment design were used to identify the gas dispersion locations. For the second objective, the pre-processed data were used to train an off-policy, model-free reinforcement learning agent with a Q-learning policy. After finishing training on the training data set, Q-learning produces a table called the Q-table. The Q-table contains state-action pairs that generate an autonomous path from any point to the source in the testing dataset. The entire process is carried out in an obstacle-free environment, and the whole scheme is designed to operate in three modes: search, track, and localize. The hyperparameter combinations of the RL agent were evaluated through a trial-and-error technique, and it was found that ε = 0.9, γ = 0.9 and α = 0.9 was the fastest path-generating combination, taking 1258.88 seconds for training and 6.2 milliseconds for path generation. The trained RL agent generated successful paths for all 31 unseen scenarios; however, the UAV successfully reached the gas source in only 23 scenarios, a success rate of 74.19%. The results pave the way for using reinforcement learning techniques for autonomous path generation in unmanned systems; exploring and improving the accuracy of the reported results remains future work.
    Matched MeSH terms: Algorithms*
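The Q-learning setup in entry 18 can be illustrated on a deliberately simplified problem: a 1-D corridor standing in for the gas-dispersion area, with the source at one end. The hyperparameters mirror the reported values (α = γ = ε = 0.9); the state space, rewards, and episode counts are invented for the sketch.

```python
import random

def train_q_table(size=10, source=9, episodes=800,
                  alpha=0.9, gamma=0.9, epsilon=0.9, seed=0):
    """Tabular Q-learning on a 1-D corridor; actions are left (0) / right (1).

    Reaching `source` (the emission point) yields reward 1; every other step
    costs 0.01, so shorter paths score higher."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(size)]     # Q[state] = [left, right]
    for _ in range(episodes):
        s = rng.randrange(size)
        for _ in range(4 * size):             # cap episode length
            if s == source:
                break
            a = rng.randrange(2) if rng.random() < epsilon else q[s].index(max(q[s]))
            s2 = max(0, min(size - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == source else -0.01
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def greedy_path(q, start, source):
    """Follow the learned greedy policy from `start` toward the source."""
    path, s = [start], start
    while s != source and len(path) < 4 * len(q):
        a = q[s].index(max(q[s]))
        s = max(0, min(len(q) - 1, s + (1 if a == 1 else -1)))
        path.append(s)
    return path
```

As in the abstract, the learned Q-table is the reusable artifact: once trained, generating a path is just a cheap greedy table lookup from any start state.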
  19. Zafar F, Malik SA, Ali T, Daraz A, Afzal AR, Bhatti F, et al.
    PLoS One, 2024;19(2):e0298624.
    PMID: 38354203 DOI: 10.1371/journal.pone.0298624
    In this paper, we propose two different control strategies for the position control of the ball in the ball and beam system (BBS). The first control strategy uses the proportional integral derivative-second derivative with a proportional integrator (PIDD2-PI). The second control strategy uses the tilt integral derivative with filter (TID-F). The designed controllers employ two distinct metaheuristic computation techniques for parameter tuning: grey wolf optimization (GWO) and the whale optimization algorithm (WOA). We evaluated the dynamic and steady-state performance of the proposed control strategies using four performance indices. In addition, to analyze the robustness of the proposed control strategies, a comprehensive comparison was performed with a variety of controllers, including tilt integral-derivative (TID), fractional order proportional integral derivative (FOPID), integral-proportional derivative (I-PD), proportional integral-derivative (PI-D), and proportional integral proportional derivative (PI-PD). By comparing different test cases, including variation in the parameters of the BBS with disturbance, we examine step response, set point tracking, disturbance rejection, and robustness of the proposed control strategies. The comprehensive comparison of results shows that WOA-PIDD2-PI-ISE and GWO-TID-F-ISE perform best. Moreover, the proposed control strategies yield oscillation-free, stable, and quick responses, which confirms their robustness to disturbances and parameter variation of the BBS, as well as their tracking performance. The practical implementation of the proposed controllers can be in the field of underactuated mechanical systems (UMS), robotics, and industrial automation. The proposed control strategies were successfully tested in MATLAB simulation.
    Matched MeSH terms: Algorithms*
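Entry 19's controllers (PIDD2-PI, TID-F) extend the classic PID loop. A minimal discrete PID on a hypothetical first-order plant shows the basic structure; the gains and the plant below are illustrative choices, not the paper's tuned values or the BBS dynamics.

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller; returns a stateful step(error) -> u function."""
    state = {"integral": 0.0, "prev_error": 0.0}
    def step(error):
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return step

def simulate(setpoint=1.0, steps=1000, dt=0.01, tau=0.5):
    """Euler simulation of a toy first-order plant  x' = (u - x) / tau."""
    pid = make_pid(kp=4.0, ki=4.0, kd=0.05, dt=dt)
    x = 0.0
    for _ in range(steps):
        u = pid(setpoint - x)
        x += dt * (u - x) / tau
    return x
```

In the paper's framework, a metaheuristic such as GWO or WOA would search over the gain vector (kp, ki, kd, ...) to minimise a performance index like ISE; here the gains are simply fixed by hand.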
  20. T A, G G, P AMD, Assaad M
    PLoS One, 2024;19(3):e0299653.
    PMID: 38478485 DOI: 10.1371/journal.pone.0299653
    Mechanical ventilation is vital for preserving the lives of critically ill patients in prolonged hospitalization units. Nevertheless, a mismatch between the patient's demand and the respiratory system can cause inconsistencies in the patient's inhalation. To tackle this problem, this study presents an Iterative Learning PID Controller (ILC-PID), a unique current-cycle feedback controller that helps attain the correct pressure and volume. The paper also offers a clear and complete examination of an efficient neural approach for generating optimal inhalation strategies. Moreover, machine learning-based classifiers are used to evaluate the precision and performance of the ILC-PID controller. These classifiers are able to forecast and choose the best configuration for various inhalation modes, reducing the likelihood that patients will require mechanical ventilation. In pressure control, the proposed neural classification exhibited average accuracy rates of 88.2% in continuous positive airway pressure (CPAP) mode and 91.7% in proportional assist ventilation (PAV) mode, whereas the ensemble classifier achieved reduced accuracy rates of 69.5% in CPAP mode and 71.7% in PAV mode, and the other classifiers averaged 78.9% in CPAP mode. In the volume investigation, for 20 cmH2O of volume generated by the neural network classifier, the neural model averaged 81.6% in CPAP mode and 84.59% in PAV mode, while the other classifiers averaged 72.17% in CPAP mode and 77.83% in PAV mode. Different approaches, such as decision trees, optimizable Bayes trees, naive Bayes trees, nearest-neighbour trees, and an ensemble of trees, were also evaluated for accuracy using the confusion matrix concept, training duration, specificity, sensitivity, and F1 score.
    Matched MeSH terms: Algorithms