Displaying publications 221 - 240 of 1460 in total

  1. Hamilton MG
    Heredity (Edinb), 2021 06;126(6):884-895.
    PMID: 33692533 DOI: 10.1038/s41437-021-00421-0
    The cost of parentage assignment precludes its application in many selective breeding programmes and molecular ecology studies, and/or limits the circumstances or number of individuals to which it is applied. Pooling samples from more than one individual, and using appropriate genetic markers and algorithms to determine parental contributions to pools, is one means of reducing the cost of parentage assignment. This paper describes and validates a novel maximum likelihood (ML) parentage-assignment method that can be used to accurately assign parentage to pooled samples of multiple individuals (previously published ML methods are applicable to samples of single individuals only) using low-density single nucleotide polymorphism (SNP) 'quantitative' (also referred to as 'continuous') genotype data. It is demonstrated with simulated data that, when applied to pools, this 'quantitative maximum likelihood' method assigns parentage with greater accuracy than established maximum likelihood parentage-assignment approaches, which rely on accurate discrete genotype calls; exclusion methods; and estimating parental contributions to pools by solving the weighted least squares problem. Quantitative maximum likelihood can be applied to pools generated using either a 'pooling-for-individual-parentage-assignment' approach, whereby each individual in a pool is tagged or traceable and from a known and mutually exclusive set of possible parents; or a 'pooling-by-phenotype' approach, whereby individuals of the same, or similar, phenotype/s are pooled. Although computationally intensive when applied to large pools, quantitative maximum likelihood has the potential to substantially reduce the cost of parentage assignment, even if applied to pools comprising only a few individuals.
    Matched MeSH terms: Algorithms*
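The pooled-likelihood idea can be illustrated with a small sketch (not the paper's implementation: it assumes a pool of full siblings from a single candidate parent pair, a Gaussian error model on pooled allele frequencies, and invented toy data):

```python
import numpy as np

rng = np.random.default_rng(0)

def pool_loglik(obs_freq, pair, parent_dosage, sigma=0.05):
    """Gaussian log-likelihood of observed pooled allele frequencies.

    Assumes every individual in the pool is an offspring of the same
    parent pair, so the pool's expected allele frequency at each SNP is
    the mean of the two parents' allele frequencies (dosage / 2)."""
    dam, sire = pair
    expected = (parent_dosage[dam] + parent_dosage[sire]) / 4.0
    resid = np.asarray(obs_freq) - expected
    return -0.5 * np.sum((resid / sigma) ** 2)

# toy data: 4 candidate parents, 50 SNPs, true parents of the pool are (0, 1)
n_parents, n_snps = 4, 50
parent_dosage = rng.integers(0, 3, size=(n_parents, n_snps)).astype(float)
obs = (parent_dosage[0] + parent_dosage[1]) / 4.0 + rng.normal(0, 0.02, n_snps)

pairs = [(i, j) for i in range(n_parents) for j in range(i + 1, n_parents)]
best = max(pairs, key=lambda p: pool_loglik(obs, p, parent_dosage))
print(best)
```

Enumerating candidate pairs and taking the argmax mirrors the ML assignment step; the paper's method additionally handles pools whose members come from several mutually exclusive parental sets.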
  2. Chui KT, Gupta BB, Liu RW, Zhang X, Vasant P, Thomas JJ
    Sensors (Basel), 2021 Sep 25;21(19).
    PMID: 34640732 DOI: 10.3390/s21196412
    Road traffic accidents have been listed in the top 10 global causes of death for many decades. Traditional measures such as education and legislation have contributed to limited improvements in reducing accidents caused by people driving in undesirable states, such as under stress or drowsiness. Attention is drawn to predicting drivers' future status so that precautions can be taken in advance as effective preventative measures. Common prediction algorithms include recurrent neural networks (RNNs), gated recurrent units (GRUs), and long short-term memory (LSTM) networks. To benefit from the advantages of each algorithm, nondominated sorting genetic algorithm-III (NSGA-III) can be applied to merge the three algorithms. This is named NSGA-III-optimized RNN-GRU-LSTM. An analysis can be made to compare the proposed prediction algorithm with the individual RNN, GRU, and LSTM algorithms. Our proposed model improves the overall accuracy by 11.2-13.6% and 10.2-12.2% in driver stress prediction and driver drowsiness prediction, respectively. Likewise, it improves the overall accuracy by 6.9-12.7% and 6.9-8.9%, respectively, compared with boosting learning with multiple RNNs, multiple GRUs, and multiple LSTMs. Compared with existing works, this proposal enhances performance by taking some key factors into account, namely using a real-world driving dataset, a greater sample size, hybrid algorithms, and cross-validation. Future research directions have been suggested for further exploration and performance enhancement.
    Matched MeSH terms: Algorithms*
  3. Alyousifi Y, Othman M, Husin A, Rathnayake U
    Ecotoxicol Environ Saf, 2021 Dec 20;227:112875.
    PMID: 34717219 DOI: 10.1016/j.ecoenv.2021.112875
    Fuzzy time series (FTS) forecasting models show great performance in predicting time series such as air pollution time series. However, they suffer from two major issues: random partitioning of the universe of discourse and the neglect of repeated fuzzy sets. In this study, a novel hybrid forecasting model that integrates fuzzy time series with Markov chains and C-means clustering with an optimal number of clusters is presented. This hybridization contributes to generating effective lengths of intervals and thus improves model accuracy. The proposed model was verified and validated with real time series data sets: the benchmark data of actual trading of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and PM10 concentration data from Melaka, Malaysia. In addition, a comparison was made with some existing fuzzy time series models. Furthermore, the mean absolute percentage error, mean squared error and Theil's U statistic were calculated as evaluation criteria to illustrate the performance of the proposed model. The empirical analysis shows that the proposed model handles the time series data sets more efficiently and provides better overall forecasting results than existing FTS models. The results show that the proposed model greatly improves prediction accuracy, outperforming several fuzzy time series models. Therefore, it can be concluded that the proposed model is a better option for forecasting air pollution parameters and other random parameters.
    Matched MeSH terms: Algorithms*
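A minimal sketch of the hybrid idea (plain 1-D k-means stands in for fuzzy C-means, and all numbers are toy data, not the paper's TAIEX or PM10 series):

```python
import numpy as np

def fit_markov_fts(series, n_states=3, iters=20):
    """Partition the universe of discourse by 1-D k-means (a stand-in for
    fuzzy C-means), then count first-order transitions between states."""
    x = np.asarray(series, float)
    centers = np.quantile(x, np.linspace(0, 1, n_states))  # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for k in range(n_states):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    T = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):   # transition counts
        T[a, b] += 1
    T /= np.maximum(T.sum(axis=1, keepdims=True), 1)
    return centers, T

def forecast_next(x_last, centers, T):
    """One-step forecast: expected next cluster center under the chain."""
    state = int(np.argmin(np.abs(centers - x_last)))
    return float(T[state] @ centers)

series = [10, 12, 11, 30, 32, 31, 10, 12, 30, 31, 11, 12]
centers, T = fit_markov_fts(series, n_states=2)
f = forecast_next(series[-1], centers, T)
print(f)
```

The forecast is the transition-probability-weighted mean of the cluster centers, which is the Markov-chain step the paper grafts onto the fuzzified series.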
  4. Herng LC, Singh S, Sundram BM, Zamri ASSM, Vei TC, Aris T, et al.
    Sci Rep, 2022 02 09;12(1):2197.
    PMID: 35140319 DOI: 10.1038/s41598-022-06341-1
    This paper aims to develop an automated web application to generate validated daily effective reproduction numbers (Rt) which can be used to examine the effects of super-spreading events due to mass gatherings and the effectiveness of the various Movement Control Order (MCO) stringency levels on the outbreak progression of COVID-19 in Malaysia. The effective reproduction number, Rt, was estimated by adopting and modifying an Rt estimation algorithm using a validated distribution mean of 3.96 and standard deviation of 4.75 with a seven-day sliding window. The Rt values generated were validated using a moving window SEIR model with a negative binomial likelihood fitted using methods from the Bayesian inferential framework. The Pearson's correlation between the Rt values estimated by the algorithm and the SEIR model was r = 0.70, p
    Matched MeSH terms: Algorithms*
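The windowed Rt estimator can be sketched as follows (a simplified Cori-style ratio using the stated serial-interval mean of 3.96 and SD of 4.75; the discretization and toy data are assumptions, not the authors' code):

```python
import numpy as np
from math import gamma as gamma_fn

def serial_interval_weights(mean=3.96, sd=4.75, n=50):
    """Gamma serial-interval distribution (shape/scale from mean and sd),
    discretized on days 1..n and renormalized."""
    shape = (mean / sd) ** 2
    scale = sd ** 2 / mean
    s = np.arange(1, n + 1, dtype=float)
    pdf = s ** (shape - 1) * np.exp(-s / scale) / (gamma_fn(shape) * scale ** shape)
    return pdf / pdf.sum()

def estimate_rt(incidence, window=7):
    """Cori-style Rt: cases in the sliding window divided by the
    infection pressure (serial-interval-weighted past incidence)."""
    inc = np.asarray(incidence, float)
    w = serial_interval_weights(n=len(inc))
    rt = []
    for t in range(window, len(inc) + 1):
        cases = inc[t - window:t].sum()
        pressure = sum(inc[max(0, u - len(w)):u][::-1] @ w[:min(u, len(w))]
                       for u in range(t - window, t))
        rt.append(cases / pressure if pressure > 0 else np.nan)
    return np.array(rt)

rt = estimate_rt([100] * 50)   # constant incidence should give Rt near 1
print(rt[-1])
```

With flat incidence the estimator settles near 1, a quick sanity check on the seven-day sliding window.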
  5. Zhang Q, Abdullah AR, Chong CW, Ali MH
    Comput Intell Neurosci, 2022;2022:8235308.
    PMID: 35126503 DOI: 10.1155/2022/8235308
    Gross domestic product (GDP) is an important indicator for determining a country's or region's economic status and development level, and it is closely linked to inflation, unemployment, and economic growth rates. These basic indicators can comprehensively and effectively reflect a country's or region's future economic development. The focus of this study is a random radial basis function artificial neural network in which the centers and smoothing factors are drawn from a uniform distribution. This stochastic learning method is a useful addition to the existing methods for determining the centers and smoothing factors of radial basis function neural networks, and it can also help the network train more efficiently. GDP forecasting is aided by the genetic algorithm radial basis function neural network, which allows the government to make timely and effective macrocontrol plans based on the forecast trend of GDP in the region. Based on the modeling of historical data, which may itself contain linear and nonlinear relationships, this study uses the genetic algorithm radial basis function neural network model to make judgments on the relationships contained in this sequence and to compare and analyze the prediction performance and generalization ability of the model, thereby verifying its applicability.
    Matched MeSH terms: Algorithms*
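The random-parameter RBF core described above can be sketched as follows (centers drawn at random from the inputs and uniform-random smoothing factors, with only the output weights solved by least squares; the GA tuning layer is omitted and the toy data are invented):

```python
import numpy as np

def fit_rbf(x, y, n_centers=10, seed=0):
    """RBF network with randomly chosen centers and uniform-random
    smoothing factors; output weights are solved by least squares."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, n_centers, replace=False)
    widths = rng.uniform(0.5, 2.0, n_centers)   # random smoothing factors
    Phi = np.exp(-((x[:, None] - centers[None, :]) / widths) ** 2)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, widths, weights

def predict_rbf(x, centers, widths, weights):
    Phi = np.exp(-((np.asarray(x, float)[:, None] - centers[None, :]) / widths) ** 2)
    return Phi @ weights

# toy smooth "indicator series" standing in for GDP data
x = np.linspace(0, 6, 40)
y = np.sin(x)
centers, widths, weights = fit_rbf(x, y)
err = np.mean(np.abs(predict_rbf(x, centers, widths, weights) - y))
print(err)
```

A genetic algorithm would then search over the random centers and widths; here they are simply drawn once.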
  6. Bukhari MM, Ghazal TM, Abbas S, Khan MA, Farooq U, Wahbah H, et al.
    Comput Intell Neurosci, 2022;2022:3606068.
    PMID: 35126487 DOI: 10.1155/2022/3606068
    Smart applications and intelligent systems are being developed that are self-reliant, adaptive, and knowledge-based in nature. Emergency and disaster management, aerospace, healthcare, IoT, and mobile applications, among them, revolutionize the world of computing. Applications with a large number of growing devices have made the current centralized cloud design impractical. Despite the use of 5G technology, delay-sensitive applications and the cloud cannot go in parallel because threshold values of certain parameters, like latency, bandwidth, and response time, are exceeded. Middleware proves to be a better solution to cope with these issues while satisfying demanding task-offloading requirements. Fog computing is recommended as middleware in this research article because it provides services at the edge of the network, so delay-sensitive applications can be served effectively. On the other hand, fog nodes contain a limited set of resources that may not process all tasks, especially those of computation-intensive applications. Additionally, fog is not a replacement for the cloud but rather a supplement to it; both behave as counterparts and offer their services to meet task needs, although fog computing is located much closer to the devices than the cloud. The problem arises when a decision needs to be made on what to offload (data, computation, or application), where to offload it (fog or cloud), and how much to offload. Fog-cloud collaboration is stochastic in terms of task-related attributes like task size, duration, arrival rate, and required resources. Dynamic task offloading becomes crucial in order to utilize the resources at fog and cloud to improve QoS. Since the formation of a task offloading policy is complex in nature, this problem is addressed in this research article, which proposes an intelligent task offloading model. Simulation results demonstrate the validity of the proposed logistic regression model, which achieves 86% accuracy compared to other algorithms, and confidence in the predictive task offloading policy by ensuring process consistency and reliability.
    Matched MeSH terms: Algorithms*
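A logistic-regression offloading decision of the kind evaluated here can be sketched with invented task attributes and a hypothetical labeling rule (plain gradient descent, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical task attributes: [size, duration, arrival_rate, cpu_demand]
n = 400
X = rng.uniform(0, 1, size=(n, 4))
# hypothetical rule: large / compute-heavy tasks offload to cloud (1),
# small latency-sensitive tasks stay on the fog node (0)
y = ((0.6 * X[:, 0] + 0.4 * X[:, 3]) > 0.5).astype(float)

Xb = np.hstack([X, np.ones((n, 1))])       # add a bias column
w = np.zeros(5)
for _ in range(2000):                      # plain batch gradient descent
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.5 * Xb.T @ (p - y) / n

acc = float(np.mean(((1 / (1 + np.exp(-Xb @ w))) > 0.5) == y))
print(acc)
```

The learned weights give a probability that a task should go to the cloud, which is the binary fog-versus-cloud decision the model automates.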
  7. Tengku Hashim TJ, Mohamed A
    PLoS One, 2017;12(10):e0177507.
    PMID: 28991919 DOI: 10.1371/journal.pone.0177507
    The growing interest in distributed generation (DG) in recent years has led to a number of generators connected to a distribution system. The integration of DGs in a distribution system has resulted in a network known as an active distribution network due to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), which is a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and percentage active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate.
    Matched MeSH terms: Algorithms*
  8. Boon KH, Khalil-Hani M, Malarvili MB
    Comput Methods Programs Biomed, 2018 Jan;153:171-184.
    PMID: 29157449 DOI: 10.1016/j.cmpb.2017.10.012
    This paper presents a method that is able to predict paroxysmal atrial fibrillation (PAF). The method uses shorter heart rate variability (HRV) signals than existing methods, and achieves good prediction accuracy. PAF is a common cardiac arrhythmia that increases the health risk of a patient, and the development of an accurate predictor of the onset of PAF is clinically important because it increases the possibility of electrically stabilizing and preventing the onset of atrial arrhythmias with different pacing techniques. We propose a multi-objective optimization algorithm based on the non-dominated sorting genetic algorithm III for optimizing the baseline PAF prediction system, which consists of the stages of pre-processing, HRV feature extraction, and a support vector machine (SVM) model. The pre-processing stage comprises heart rate correction, interpolation, and signal detrending. After that, time-domain, frequency-domain, and non-linear HRV features are extracted from the pre-processed data in the feature extraction stage. Then, these features are used as input to the SVM for predicting the PAF event. The proposed optimization algorithm is used to optimize the parameters and settings of the various HRV feature extraction algorithms, select the best feature subsets, and tune the SVM parameters simultaneously for maximum prediction performance. The proposed method achieves an accuracy rate of 87.7%, which significantly outperforms most of the previous works. This accuracy rate is achieved even with the HRV signal length being reduced from the typical 30 min to just 5 min (a reduction of 83%). Furthermore, another significant result is that the sensitivity rate, which is considered more important than other performance metrics in this paper, can be improved with the trade-off of lower specificity.
    Matched MeSH terms: Algorithms*
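The time-domain HRV features mentioned above are standard; a small sketch with toy RR intervals (feature names and the 50 ms threshold follow common HRV conventions, not the paper's exact feature set):

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Standard time-domain HRV features from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),                     # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),       # beat-to-beat variability
        "pnn50": 100 * np.mean(np.abs(diff) > 50),  # % successive diffs > 50 ms
    }

feats = hrv_time_features([800, 810, 790, 850, 820, 805, 815])
print(feats)
```

Such features, alongside frequency-domain and non-linear ones, form the candidate pool that the NSGA-based optimizer selects from before SVM classification.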
  9. Khan SU, Rahim MKA, Aminu-Baba M, Murad NA
    PLoS One, 2017;12(12):e0189240.
    PMID: 29253852 DOI: 10.1371/journal.pone.0189240
    This paper proposes the correction of faulty sensors using a synthesis of the greedy sparse constrained optimization (GSCO) technique. The failure of sensors can damage the radiation power pattern in terms of sidelobes and nulls. The synthesis problem recovers the desired power pattern with a reduced number of sensors within a greedy-algorithm framework and is solved with the orthogonal matching pursuit (OMP) technique. Numerical simulation examples of linear arrays are offered to demonstrate the effectiveness of obtaining the desired power pattern with a reduced number of antenna sensors, which is compared with the available techniques in terms of sidelobe level and number of nulls.
    Matched MeSH terms: Algorithms*
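The OMP step can be sketched as a textbook greedy solver on a toy sparse-recovery problem (the array-synthesis formulation itself is not reproduced):

```python
import numpy as np

def omp(A, b, n_nonzero):
    """Orthogonal matching pursuit: greedily add the column most
    correlated with the residual, then refit on the selected support."""
    residual, support = b.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

# toy recovery: 3 active "sensors" out of 30, 40 pattern samples
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 30))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(30)
x_true[[3, 11, 25]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = omp(A, b, 3)
print(sorted(np.flatnonzero(np.abs(x_hat) > 0.1)))
```

In the paper's setting, the columns of the dictionary would encode element steering responses and `b` the desired power pattern samples.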
  10. Odili JB, Noraziah A, Alkazemi B, Zarina M
    Sci Rep, 2022 Oct 15;12(1):17319.
    PMID: 36243886 DOI: 10.1038/s41598-022-22242-9
    This paper presents the data description of the African buffalo optimization algorithm (ABO). ABO is a recently designed optimization algorithm that is inspired by the migrant behaviour of African buffalos in the vast African landscape. Organizing their large herds, which can exceed a thousand buffalos, using just two principal sounds, the /maaa/ and the /waaa/ calls, presents a good foundation for the development of an optimization algorithm. Since elaborate descriptions of the manual workings of optimization algorithms are rare in the literature, this paper aims to solve this problem; this is its main contribution. It is our belief that an elaborate manual description of the workings of an optimization algorithm makes it user-friendly and encourages reproducibility of the experimental procedures performed using the algorithm. Again, describing the algorithm's basic flow, stochastic and data generation processes in language so simple that any non-expert can appreciate and use it, together with the practical implementation of the popular benchmark Rosenbrock and Shekel Foxhole functions with the novel algorithm, will assist the research community in benefiting maximally from the contributions of this novel algorithm. Finally, benchmarking the good experimental output of the ABO against those of the popular, highly effective and efficient Cuckoo Search and Flower Pollination Algorithm underscores the ABO as a worthy contribution to the existing body of population-based optimization algorithms.
    Matched MeSH terms: Algorithms*
  11. Barua PD, Baygin N, Dogan S, Baygin M, Arunkumar N, Fujita H, et al.
    Sci Rep, 2022 Oct 14;12(1):17297.
    PMID: 36241674 DOI: 10.1038/s41598-022-21380-4
    Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch and transfer learning-based model to classify various pain intensities using facial images. The input facial images were segmented into dynamic-sized horizontal patches or "shutter blinds". A lightweight deep network DarkNet19 pre-trained on ImageNet1K was used to generate deep features from the shutter blinds and the undivided resized segmented input facial image. The most discriminative features were selected from these deep features using iterative neighborhood component analysis, which were then fed to a standard shallow fine k-nearest neighbor classifier for classification using tenfold cross-validation. The proposed shutter blinds-based model was trained and tested on datasets derived from two public databases-University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database and Denver Intensity of Spontaneous Facial Action Database-which both comprised four pain intensity classes that had been labeled by human experts using validated facial action coding system methodology. Our shutter blinds-based classification model attained more than 95% overall accuracy rates on both datasets. The excellent performance suggests that the automated pain intensity classification model can be deployed to assist doctors in the non-verbal detection of pain using facial images in various situations (e.g., non-communicative patients or during surgery). This system can facilitate timely detection and management of pain.
    Matched MeSH terms: Algorithms*
  12. Sulaiman R, Azeman NH, Abu Bakar MH, Ahmad Nazri NA, Masran AS, Ashrif A Bakar A
    Appl Spectrosc, 2023 Feb;77(2):210-219.
    PMID: 36348500 DOI: 10.1177/00037028221140924
    Nutrient solution plays an essential role in providing macronutrients to hydroponic plants. Determining nitrogen in the form of nitrate is crucial, as either a deficient or excessive supply of nitrate ions may reduce the plant yield or lead to environmental pollution. This work aims to evaluate the performance of feature reduction techniques and conventional machine learning (ML) algorithms in determining nitrate concentration levels. Two feature reduction techniques, linear discriminant analysis (LDA) and principal component analysis (PCA), and seven ML algorithms, namely k-nearest neighbors (KNN), support vector machine, decision trees, naïve Bayes, random forest (RF), gradient boosting, and extreme gradient boosting, were evaluated using a high-dimensional spectroscopic dataset containing measured nitrate-nitrite mixed solution absorbance data. Despite the limited and uneven number of samples per class, this study demonstrated that PCA outperformed LDA on the high-dimensional spectroscopic dataset. The classification accuracy of ML algorithms combined with PCA ranged from 92.7% to 99.8%, whereas the classification accuracy of ML algorithms combined with LDA ranged from 80.7% to 87.6%. PCA with the RF algorithm exhibited the best performance, with 99.8% accuracy.
    Matched MeSH terms: Algorithms*
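The PCA-plus-classifier pipeline can be sketched with scikit-learn on synthetic "spectra" (assuming scikit-learn is available; the data and class structure are invented, not the nitrate dataset):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
# synthetic "spectra": 300 samples x 200 wavelengths, 3 concentration
# classes whose mean absorbance curves differ by a class-specific scale
n_per, n_wl = 100, 200
base = np.sin(np.linspace(0, 3, n_wl))
X = np.vstack([base * (1 + 0.3 * c) + rng.normal(0, 0.05, (n_per, n_wl))
               for c in range(3)])
y = np.repeat([0, 1, 2], n_per)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      random_state=0, stratify=y)
model = make_pipeline(PCA(n_components=5), RandomForestClassifier(random_state=0))
model.fit(Xtr, ytr)
score = model.score(Xte, yte)
print(score)
```

Swapping the classifier step exercises the other six ML algorithms the study compares.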
  13. Shaikh AK, Nazir A, Khan I, Shah AS
    Sci Rep, 2022 Dec 29;12(1):22562.
    PMID: 36581655 DOI: 10.1038/s41598-022-26499-y
    Smart grids and smart homes are getting people's attention in the modern era of smart cities. The advancements of smart technologies and smart grids have created challenges related to energy efficiency and production according to the future demand of clients. Machine learning, specifically neural network-based methods, remained successful in energy consumption prediction, but still, there are gaps due to uncertainty in the data and limitations of the algorithms. Research published in the literature has used small datasets and profiles of primarily single users; therefore, models have difficulties when applied to large datasets with profiles of different customers. Thus, a smart grid environment requires a model that handles consumption data from thousands of customers. The proposed model enhances the newly introduced method of Neural Basis Expansion Analysis for interpretable Time Series (N-BEATS) with a big dataset of energy consumption of 169 customers. Further, to validate the results of the proposed model, a performance comparison has been carried out with the Long Short Term Memory (LSTM), Blocked LSTM, Gated Recurrent Units (GRU), Blocked GRU and Temporal Convolutional Network (TCN). The proposed interpretable model improves the prediction accuracy on the big dataset containing energy consumption profiles of multiple customers. Incorporating covariates into the model improved accuracy by learning past and future energy consumption patterns. Based on a large dataset, the proposed model performed better for daily, weekly, and monthly energy consumption predictions. The forecasting accuracy of the N-BEATS interpretable model for 1-day-ahead energy consumption with "day as covariates" remained better than the 1, 2, 3, and 4-week scenarios.
    Matched MeSH terms: Algorithms*
  14. Sheikh Khozani Z, Ehteram M, Mohtar WHMW, Achite M, Chau KW
    Environ Sci Pollut Res Int, 2023 Sep;30(44):99362-99379.
    PMID: 37610542 DOI: 10.1007/s11356-023-29406-8
    A wastewater treatment plant (WWTP) is an essential part of the urban water cycle, which reduces concentration of pollutants in the river. For monitoring and control of WWTPs, researchers develop different models and systems. This study introduces a new deep learning model for predicting effluent quality parameters (EQPs) of a WWTP. A method that couples a convolutional neural network (CNN) with a novel version of radial basis function neural network (RBFNN) is proposed to simultaneously predict and estimate uncertainty of data. The multi-kernel RBFNN (MKRBFNN) uses two activation functions to improve the efficiency of the RBFNN model. The salp swarm algorithm is utilized to set the MKRBFNN and CNN parameters. The main advantage of the CNN-MKRBFNN-salp swarm algorithm (SSA) is to automatically extract features from data points. In this study, influent parameters (if) are used as inputs. Biological oxygen demand (BODif), chemical oxygen demand (CODif), total suspended solids (TSSif), volatile suspended solids (VSSif), and sediment (SEDef) are used to predict EQPs, including CODef, BODef, and TSSef. At the testing level, the Nash-Sutcliffe efficiencies of CNN-MKRBFNN-SSA are 0.98, 0.97, and 0.98 for predicting CODef, BODef, and TSSef. Results indicate that the CNN-MKRBFNN-SSA is a robust model for simulating complex phenomena.
    Matched MeSH terms: Algorithms*
  15. Ismail AM, Remli MA, Choon YW, Nasarudin NA, Ismail NN, Ismail MA, et al.
    J Integr Bioinform, 2023 Jun 01;20(2).
    PMID: 37341516 DOI: 10.1515/jib-2022-0051
    Analyzing metabolic pathways in systems biology requires accurate kinetic parameters that represent the simulated in vivo processes. Simulation of the fermentation pathway in the Saccharomyces cerevisiae kinetic model helps save much time in the optimization process. Fitting the simulated model to the experimental data is categorized under the parameter estimation problem. Parameter estimation is conducted to obtain the optimal values for parameters related to the fermentation process. This step is essential because insufficient identification of model parameters can cause erroneous conclusions. The kinetic parameters cannot be measured directly. Therefore, they must be estimated from the experimental data, either in vitro or in vivo. Parameter estimation is a challenging task in the biological process due to the complexity and nonlinearity of the model. Therefore, we propose the Artificial Bee Colony (ABC) algorithm to estimate the parameters in the fermentation pathway of S. cerevisiae to obtain more accurate values. A metabolite with a total of six parameters is involved in this article. The experimental results show that ABC outperforms other estimation algorithms and gives more accurate kinetic parameter values for the simulated model. Most of the estimated kinetic parameter values obtained from the proposed algorithm are the closest to the experimental data.
    Matched MeSH terms: Algorithms*
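A minimal ABC loop for kinetic parameter estimation might look like this (a generic textbook ABC fitted to a toy exponential-decay model, not the S. cerevisiae pathway):

```python
import numpy as np

def abc_minimize(loss, bounds, n_food=10, iters=200, limit=20, seed=4):
    """Minimal Artificial Bee Colony: employed/onlooker bees perturb food
    sources; sources abandoned after `limit` failures are re-scouted."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    d = len(lo)
    food = rng.uniform(lo, hi, (n_food, d))
    fit = np.array([loss(f) for f in food])
    trials = np.zeros(n_food, int)
    for _ in range(iters):
        for phase in ("employed", "onlooker"):
            if phase == "employed":
                idx = np.arange(n_food)
            else:  # onlookers prefer better (lower-loss) sources
                p = fit.max() - fit + 1e-12
                idx = rng.choice(n_food, n_food, p=p / p.sum())
            for i in idx:
                k, j = rng.integers(n_food), rng.integers(d)
                cand = food[i].copy()
                cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
                cand = np.clip(cand, lo, hi)
                c_fit = loss(cand)
                if c_fit < fit[i]:
                    food[i], fit[i], trials[i] = cand, c_fit, 0
                else:
                    trials[i] += 1
        worn = trials > limit                      # scout phase
        food[worn] = rng.uniform(lo, hi, (worn.sum(), d))
        fit[worn] = [loss(f) for f in food[worn]]
        trials[worn] = 0
    best = int(np.argmin(fit))
    return food[best], float(fit[best])

# toy "kinetic" fit: recover (a, b) of y = a * exp(-b * t) from noiseless data
t = np.linspace(0, 5, 30)
obs = 2.0 * np.exp(-0.7 * t)
sse = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - obs) ** 2)
params, err = abc_minimize(sse, [(0, 5), (0, 5)])
print(params, err)
```

In the paper's setting the loss would compare a simulated pathway trajectory against experimental metabolite measurements rather than this toy curve.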
  16. Roslan MF, Al-Shetwi AQ, Hannan MA, Ker PJ, Zuhdi AWM
    PLoS One, 2020;15(12):e0243581.
    PMID: 33362200 DOI: 10.1371/journal.pone.0243581
    The lack of control in voltage overshoot, transient response, and steady-state error are major issues that are frequently encountered in a grid-connected photovoltaic (PV) system, resulting in poor power quality performance and damage to the overall power system. This paper presents the performance of a control strategy for an inverter in a three-phase grid-connected PV system. The system consists of a PV panel, a boost converter, a DC link, an inverter, and a resistor-inductor (RL) filter and is connected to the utility grid through a voltage source inverter. The main objective of the proposed strategy is to improve the power quality performance of the three-phase grid-connected inverter system by optimising the proportional-integral (PI) controller. Such a strategy aims to reduce the DC link input voltage fluctuation, decrease the harmonics, and stabilise the output current, voltage, frequency, and power flow. The particle swarm optimisation (PSO) technique was implemented to tune the PI controller parameters by minimising the error of the voltage regulator and current controller schemes in the inverter system. The system model and control strategies were implemented in the MATLAB/Simulink environment (Version 2020A) using the Simscape Power Systems toolbox. Results show that the proposed strategy outperformed other reported research works, with total harmonic distortion (THD) of the grid voltage and current of 0.29% and 2.72%, respectively, and a transient response time of 0.1853 s. Compared to conventional systems, the PI controller with PSO-based optimisation reduces voltage overshoot by 11.1% while reducing the time to reach the equilibrium state by 32.6%. The consideration of additional input parameters and the optimisation of input parameters were identified as the two main factors that contribute to the significant improvements in power quality control. Therefore, the proposed strategy effectively enhances the power quality of the utility grid, and such an enhancement contributes to the efficient and smooth integration of the PV system.
    Matched MeSH terms: Algorithms*
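The PSO-tuned PI idea can be sketched on a toy first-order plant (the plant, cost function, and PSO coefficients are assumptions; the paper tunes the PI controller of a full Simulink inverter model):

```python
import numpy as np

def itae_cost(gains, dt=0.01, steps=500):
    """ITAE of a unit-step response: a PI controller driving a toy
    first-order plant dy/dt = (-y + u) / tau (stand-in for the real loop)."""
    kp, ki = gains
    tau, y, integ, cost = 0.1, 0.0, 0.0, 0.0
    for n in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u) / tau
        if abs(y) > 1e6:          # unstable gain pair: penalize heavily
            return 1e9
        cost += (n * dt) * abs(e) * dt
    return cost

def pso(cost, bounds, n=15, iters=60, seed=5):
    """Plain global-best PSO with inertia 0.7 and c1 = c2 = 1.5."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[np.argmin(pcost)].copy()
    return g, float(pcost.min())

gains, best = pso(itae_cost, [(0.1, 20.0), (0.1, 50.0)])
print(gains, best)
```

The time-weighted error integral penalizes both overshoot and slow settling, which is the trade-off the PSO-tuned PI controller targets.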
  17. Tukkee AS, Bin Abdul Wahab NI, Binti Mailah NF, Bin Hassan MK
    PLoS One, 2024;19(2):e0298094.
    PMID: 38330067 DOI: 10.1371/journal.pone.0298094
    Recently, global interest in organizing the functioning of renewable energy resources (RES) through microgrids (MG) has developed, as a unique approach to tackle technical, economic, and environmental difficulties. This study proposes implementing a developed Distributable Resource Management Strategy (DRMS) in hybrid microgrid systems to reduce the total net present cost (TNPC), energy loss (Ploss), and gas emissions (GEM) while taking the cost-benefit index (CBI) and loss of power supply probability (LPSP) as operational constraints. The Grey Wolf Optimizer (GWO) was utilized to find the optimal size of the hybrid microgrid components and calculate the multi-objective function with and without the proposed management method. In addition, a detailed sensitivity analysis of numerous economic and technological parameters was performed to assess system performance. The proposed strategy reduced the system's total net present cost, power loss, and emissions by 1.06%, 8.69%, and 17.19%, respectively, compared to normal operation. The Firefly Algorithm (FA) and Particle Swarm Optimization (PSO) techniques were used to verify the results. This study gives a more detailed plan for evaluating the effectiveness of hybrid microgrid systems from technical, economic, and environmental perspectives.
    Matched MeSH terms: Algorithms*
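The GWO search loop used for component sizing can be sketched generically (toy quadratic objective; the real multi-objective microgrid cost is not reproduced):

```python
import numpy as np

def gwo(cost, bounds, n_wolves=12, iters=100, seed=6):
    """Core Grey Wolf Optimizer loop: the pack moves relative to the three
    best wolves (alpha, beta, delta); `a` decays from 2 to 0 over iterations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, (n_wolves, len(lo)))
    for it in range(iters):
        fitness = np.array([cost(x) for x in X])
        leaders = X[np.argsort(fitness)[:3]]          # alpha, beta, delta
        a = 2 * (1 - it / iters)
        new_X = np.empty_like(X)
        for i, x in enumerate(X):
            moves = []
            for leader in leaders:
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - x)            # encircling distance
                moves.append(leader - A * D)
            new_X[i] = np.clip(np.mean(moves, axis=0), lo, hi)
        X = new_X
    fitness = np.array([cost(x) for x in X])
    return X[np.argmin(fitness)], float(fitness.min())

# toy sizing problem: minimize a quadratic "cost" over two component sizes
best_x, best_f = gwo(lambda x: (x[0] - 3) ** 2 + (x[1] - 1) ** 2,
                     [(0, 10), (0, 10)])
print(best_x, best_f)
```

In the study, the cost function would combine TNPC, Ploss, and GEM under the CBI and LPSP constraints rather than this toy quadratic.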
  18. Husnain AU, Mokhtar N, Mohamed Shah NB, Dahari MB, Azmi AA, Iwahashi M
    PLoS One, 2024;19(2):e0296969.
    PMID: 38394180 DOI: 10.1371/journal.pone.0296969
    This work has three primary objectives: first, to establish a gas concentration map; second, to estimate the point of emission of the gas; and third, to generate a path from any location to the point of emission for UAVs or UGVs. A mountable array of MOX sensors was developed so that the angles and distances among the sensors, alongside the sensor data, could be utilized to identify the influx of gas plumes. Gas dispersion experiments were conducted under indoor conditions to collect data at numerous locations and angles for training machine learning algorithms. Taguchi's orthogonal arrays for experiment design were used to identify the gas dispersion locations. For the second objective, the pre-processed data were used to train an off-policy, model-free reinforcement learning agent with a Q-learning policy. After finishing the training on the training data set, Q-learning produces a table called the Q-table. The Q-table contains state-action pairs that generate an autonomous path from any point to the source from the testing dataset. The entire process is carried out in an obstacle-free environment, and the whole scheme is designed to be conducted in three modes: search, track, and localize. The hyperparameter combinations of the RL agent were evaluated through a trial-and-error technique, and it was found that ε = 0.9, γ = 0.9 and α = 0.9 was the fastest path-generating combination, taking 1258.88 seconds for training and 6.2 milliseconds for path generation. Out of 31 unseen scenarios, the trained RL agent generated successful paths for all 31 scenarios; however, the UAV was able to reach the gas source successfully in 23 scenarios, producing a success rate of 74.19%. The results pave the way for reinforcement learning techniques to be used for autonomous path generation in unmanned systems, with exploring and improving the accuracy of the reported results left as future work.
    Matched MeSH terms: Algorithms*
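The Q-learning scheme can be sketched on a toy obstacle-free grid using the reported hyperparameters ε = γ = α = 0.9 (grid size, rewards, and episode count are assumptions):

```python
import numpy as np

# 5x5 grid; the gas source (goal) sits at (4, 4); actions: up/down/left/right
rng = np.random.default_rng(7)
n, goal = 5, (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
Q = np.zeros((n, n, 4))
eps, gamma, alpha = 0.9, 0.9, 0.9   # hyperparameters reported in the abstract

for episode in range(500):
    s = (0, 0)
    while s != goal:
        a = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(Q[s]))
        nxt = (min(max(s[0] + actions[a][0], 0), n - 1),
               min(max(s[1] + actions[a][1], 0), n - 1))
        r = 10.0 if nxt == goal else -1.0
        Q[s][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[s][a])
        s = nxt

# greedy rollout from the learned Q-table generates the autonomous path
s, path = (0, 0), [(0, 0)]
while s != goal and len(path) < 20:
    a = int(np.argmax(Q[s]))
    s = (min(max(s[0] + actions[a][0], 0), n - 1),
         min(max(s[1] + actions[a][1], 0), n - 1))
    path.append(s)
print(path)
```

Reading actions greedily from the Q-table is what makes post-training path generation take only milliseconds, as the abstract reports.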
  19. Khan A, Hizam H, Bin Abdul Wahab NI, Lutfi Othman M
    PLoS One, 2020;15(8):e0235668.
    PMID: 32776932 DOI: 10.1371/journal.pone.0235668
    In this paper, a novel, effective meta-heuristic, population-based Hybrid Firefly Particle Swarm Optimization (HFPSO) algorithm is applied to solve different non-linear and convex optimal power flow (OPF) problems. The HFPSO algorithm is a hybridization of Firefly Optimization (FFO) and Particle Swarm Optimization (PSO), designed to enhance the exploration and exploitation strategies and to speed up the convergence rate. In this work, five objective functions of OPF problems are studied to prove the strength of the proposed method: total generation cost minimization, voltage profile improvement, voltage stability enhancement, reduction of transmission-line active power losses, and reduction of transmission-line reactive power losses. The particular fitness function is chosen as a single objective based on control parameters. The proposed HFPSO technique is coded using MATLAB software and its effectiveness is tested on the standard IEEE 30-bus test system. The obtained results of the proposed algorithm are compared to simulated results of the original Particle Swarm Optimization (PSO) method and present state-of-the-art optimization techniques. The comparison of optimum solutions reveals that the recommended method can generate optimum, feasible, global solutions with fast convergence and can also deal with the challenges and complexities of various OPF problems.
    Matched MeSH terms: Algorithms*
  20. Liew KJ, Ramli A, Abd Majid A
    PLoS One, 2016;11(6):e0156724.
    PMID: 27315105 DOI: 10.1371/journal.pone.0156724
    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
    Matched MeSH terms: Algorithms*
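SciPy's RBFInterpolator offers the ingredients described above (a thin-plate spline kernel, a smoothing penalty, and a k-nearest-neighbour restriction); a sketch on an invented noisy height field, not the paper's 3D point sets:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(8)
# invented noisy height-field "point set": z = f(x, y) + noise
pts = rng.uniform(-1, 1, (200, 2))
z_clean = np.sin(2 * pts[:, 0]) * np.cos(2 * pts[:, 1])
z_noisy = z_clean + rng.normal(0, 0.1, 200)

# smoothing=0 reproduces the data exactly (pure interpolation) ...
tps_exact = RBFInterpolator(pts, z_noisy, kernel="thin_plate_spline")

# ... while a smoothing penalty plus a k-nearest-neighbour restriction
# yields a smoothed surface onto which the noisy points can be projected
tps_smooth = RBFInterpolator(pts, z_noisy, kernel="thin_plate_spline",
                             smoothing=0.5, neighbors=30)
z_denoised = tps_smooth(pts)
print(np.std(z_noisy - z_clean), np.std(z_denoised - z_clean))
```

The paper's bootstrap test-error estimate is what would select the smoothing parameter; here it is fixed by hand for illustration.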