Displaying publications 121 - 140 of 1459 in total

  1. Saad MA, Jaafar R, Chellappan K
    Sensors (Basel), 2023 Jun 12;23(12).
    PMID: 37420692 DOI: 10.3390/s23125526
    Data gathering in wireless sensor networks (WSNs) is vital for deploying and enabling WSNs with the Internet of Things (IoT). In many applications the network is deployed over a large-scale area, which reduces the efficiency of data collection, and the network is subject to multiple attacks that impact the reliability of the collected data. Hence, data collection should consider trust in sources and routing nodes, making trust an additional optimization objective of data gathering alongside energy consumption, traveling time, and cost. Joint optimization of these goals requires multiobjective optimization. This article proposes a modified social class multiobjective particle swarm optimization (SC-MOPSO) method. The modified SC-MOPSO method features application-dependent operators, named interclass operators. In addition, it includes solution generation, adding and deleting rendezvous points, and moving to the upper and lower class. Considering that SC-MOPSO provides a set of nondominated solutions as a Pareto front, we employed one of the multicriteria decision-making (MCDM) methods, simple additive weighting (SAW), to select one solution from the Pareto front. The results show that both SC-MOPSO and SAW are superior in terms of domination: the set coverage of SC-MOPSO over NSGA-II is 0.06, compared with only 0.04 of NSGA-II over SC-MOPSO. At the same time, it showed competitive performance with NSGA-III.
    Matched MeSH terms: Algorithms*
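The SAW step described in the entry above - picking a single compromise solution from a Pareto front - can be sketched in a few lines. This is a hedged illustration: the objective values, weights, and minimize/maximize directions below are invented for the example, not taken from the paper.

```python
# Hedged sketch: selecting one solution from a Pareto front with
# simple additive weighting (SAW), a common MCDM method. Objectives,
# weights, and min/max directions are illustrative assumptions.

def saw_select(front, weights, minimize):
    """Return the index of the best solution under SAW.

    front    : list of objective vectors (one per nondominated solution)
    weights  : importance weight per objective (sums to 1)
    minimize : per-objective flag; True if smaller is better
    """
    n_obj = len(weights)
    scores = []
    for sol in front:
        total = 0.0
        for j in range(n_obj):
            col = [s[j] for s in front]
            lo, hi = min(col), max(col)
            span = (hi - lo) or 1.0
            # Normalize to [0, 1] in "benefit" form: higher is better.
            v = (hi - sol[j]) / span if minimize[j] else (sol[j] - lo) / span
            total += weights[j] * v
        scores.append(total)
    return max(range(len(front)), key=scores.__getitem__)

# Toy front: (energy, travel time, trust), with trust to be maximized.
front = [(5.0, 12.0, 0.7), (6.0, 9.0, 0.9), (4.0, 15.0, 0.6)]
best = saw_select(front, weights=[0.4, 0.3, 0.3], minimize=[True, True, False])
print("selected solution:", front[best])
```

The min-max normalization makes the differently scaled objectives commensurable before the weighted sum is taken.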
  2. Asim Shahid M, Alam MM, Mohd Su'ud M
    PLoS One, 2023;18(4):e0284209.
    PMID: 37053173 DOI: 10.1371/journal.pone.0284209
    Cloud computing is among the fastest-growing technologies in the computer industry, owing to the benefits and opportunities it offers. It also addresses the difficulties and issues that make users more likely to accept and use the technology. The proposed research compares machine learning (ML) algorithms, namely Naïve Bayes (NB), Library Support Vector Machine (LibSVM), Multinomial Logistic Regression (MLR), Sequential Minimal Optimization (SMO), K-Nearest Neighbor (KNN), and Random Forest (RF), to determine which classifier gives better accuracy and lower fault-prediction error. In this research, on the secondary data results, the NB classifier gives the highest accuracy and lowest fault prediction for CPU-Mem Mono in terms of 80/20 (77.01%), 70/30 (76.05%), and 5-fold cross-validation (74.88%), and for CPU-Mem Multi in terms of 80/20 (89.72%), 70/30 (90.28%), and 5-fold cross-validation (92.83%). Furthermore, on HDD Mono the SMO classifier gives the highest accuracy and lowest fault prediction in terms of 80/20 (87.72%), 70/30 (89.41%), and 5-fold cross-validation (88.38%), and on HDD Multi in terms of 80/20 (93.64%), 70/30 (90.91%), and 5-fold cross-validation (88.20%). For the primary data results, the RF classifier gives the highest accuracy and lowest fault prediction in terms of 80/20 (97.14%), 70/30 (96.19%), and 5-fold cross-validation (95.85%), but its algorithm complexity (0.17 seconds) is not good. With 80/20 (95.71%), 70/30 (95.71%), and 5-fold cross-validation (95.71%), SMO has the second-highest accuracy and lowest fault prediction, but its algorithm complexity is good (0.3 seconds). The difference in accuracy and fault prediction between RF and SMO is only 0.13%, and the difference in time complexity is 14 seconds. We therefore decided to modify SMO. Finally, the Modified Sequential Minimal Optimization (MSMO) algorithm is proposed, achieving the highest accuracy and lowest fault-prediction error in terms of 80/20 (96.42%), 70/30 (96.42%), and 5-fold cross-validation (96.50%).
    Matched MeSH terms: Algorithms*
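The evaluation protocol used throughout the entry above - an 80/20 holdout split alongside 5-fold cross-validation - can be sketched as follows. The nearest-centroid classifier and synthetic two-class data are illustrative stand-ins for the paper's NB/SMO/RF models and cloud fault datasets.

```python
# Hedged sketch: comparing a classifier's accuracy under an 80/20
# holdout split and 5-fold cross-validation, with a toy classifier.
import random

def nearest_centroid_fit(X, y):
    # Mean feature vector per class.
    cents = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [sum(c) / len(rows) for c in zip(*rows)]
    return cents

def nearest_centroid_predict(cents, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda lab: dist(cents[lab], x))

def accuracy(X_tr, y_tr, X_te, y_te):
    cents = nearest_centroid_fit(X_tr, y_tr)
    hits = sum(nearest_centroid_predict(cents, x) == t
               for x, t in zip(X_te, y_te))
    return hits / len(y_te)

def kfold_accuracy(X, y, k=5, seed=0):
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k disjoint test folds
    accs = []
    for fold in folds:
        tr = [i for i in idx if i not in fold]
        accs.append(accuracy([X[i] for i in tr], [y[i] for i in tr],
                             [X[i] for i in fold], [y[i] for i in fold]))
    return sum(accs) / k

# Synthetic two-class data: class 0 near (0,0), class 1 near (3,3).
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(50)] + \
    [[rng.gauss(3, 1), rng.gauss(3, 1)] for _ in range(50)]
y = [0] * 50 + [1] * 50

holdout = accuracy(X[:80], y[:80], X[80:], y[80:])  # 80/20 split
cv = kfold_accuracy(X, y, k=5)
print(f"holdout 80/20: {holdout:.2f}, 5-fold CV: {cv:.2f}")
```

Cross-validation averages the accuracy over k rotations of the test fold, which is why it is reported alongside the single holdout figure in the paper.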
  3. Devan PAM, Ibrahim R, Omar M, Bingi K, Abdulrab H
    Sensors (Basel), 2023 Jul 07;23(13).
    PMID: 37448072 DOI: 10.3390/s23136224
    A novel hybrid Harris Hawk-Arithmetic Optimization Algorithm (HHAOA) for optimizing industrial Wireless Mesh Networks (WMNs) and real-time pressure process control is proposed in this research article. The proposed algorithm draws on Harris Hawk Optimization and the Arithmetic Optimization Algorithm to improve the position relocation problems, premature convergence, and poor accuracy of existing techniques. The HHAOA algorithm was evaluated on various benchmark functions and compared with other optimization algorithms, namely the Arithmetic Optimization Algorithm, Moth Flame Optimization, the Sine Cosine Algorithm, Grey Wolf Optimization, and Harris Hawk Optimization. The proposed algorithm was also applied to a real-world industrial wireless mesh network simulation and to experimentation on a real-time pressure process control system. All the results demonstrate that the HHAOA algorithm outperforms the other algorithms in terms of mean, standard deviation, convergence speed, accuracy, and robustness, and improves client-router connectivity and network congestion with a 31.7% reduction in Wireless Mesh Network routers. In the real-time pressure process, the HHAOA-optimized Fractional-order Predictive PI (FOPPI) controller produced a robust and smoother control signal, leading to minimal peak overshoot and, on average, 53.244% faster settling. Based on these results, the algorithm enhances the efficiency and reliability of industrial wireless networks and real-time pressure process control systems, which are critical for industrial automation and control applications.
    Matched MeSH terms: Algorithms*
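Metaheuristics such as the HHAOA above are routinely compared on benchmark functions like the sphere function. The skeleton below is not HHAOA itself; it is a minimal population-based optimizer with a shrinking exploration radius, shown only to illustrate the evaluation loop such comparisons rely on. All parameters are illustrative assumptions.

```python
# Hedged sketch: a generic population-based metaheuristic evaluated on
# the sphere benchmark. Real algorithms (HHO, AOA, GWO, ...) differ in
# how candidate positions are generated; the loop structure is shared.
import random

def sphere(x):
    # Classic benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

def minimal_metaheuristic(obj, dim=5, pop=20, iters=200, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=obj)
    for t in range(iters):
        step = 1.0 * (1 - t / iters)  # shrinking exploration radius
        for i, x in enumerate(X):
            # Candidate drawn around the current best solution.
            cand = [b + step * rng.uniform(-1, 1) for b in best]
            if obj(cand) < obj(x):
                X[i] = cand  # greedy replacement
        best = min(X, key=obj)
    return best, obj(best)

best, val = minimal_metaheuristic(sphere)
print("best sphere value found:", round(val, 6))
```

Benchmark studies like the one above repeat such runs many times and report the mean and standard deviation of the final objective value.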
  4. Khoh WH, Pang YH, Yap HY
    F1000Res, 2022;11:283.
    PMID: 37600220 DOI: 10.12688/f1000research.74134.2
    Background: With advances in current technology, hand gesture recognition has gained considerable attention. It has been extended to recognize more distinctive movements, such as a signature, in human-computer interaction (HCI), which enables the computer to identify a person in a non-contact acquisition environment. This application is known as in-air hand gesture signature recognition. To our knowledge, there are no publicly accessible databases and no detailed descriptions of the acquisition protocol in this domain. Methods: This paper aims to demonstrate the procedure for collecting an in-air hand gesture signature database. This database is disseminated as a reference database in the relevant field for evaluation purposes. The database is constructed from the signatures of 100 volunteer participants, who contributed their signatures in two different sessions. Each session provided 10 genuine samples enrolled using a Microsoft Kinect sensor camera, generating a genuine dataset. In addition, a forgery dataset was collected by imitating the genuine samples. For evaluation, each sample was preprocessed with hand localization and predictive hand segmentation algorithms to extract the hand region. Then, several vector-based features were extracted. Results: In this work, classification performance analysis and system robustness analysis were carried out. In the classification analysis, a multiclass Support Vector Machine (SVM) was employed to classify the samples, achieving 97.43% accuracy; the system robustness analysis demonstrated low error rates of 2.41% and 5.07% under random forgery and skilled forgery attacks, respectively. Conclusions: These findings indicate that the hand gesture signature is not only feasible for human classification, but its properties are also robust against forgery attacks.
    Matched MeSH terms: Algorithms*
  5. Mohd Faizal AS, Hon WY, Thevarajah TM, Khor SM, Chang SW
    Med Biol Eng Comput, 2023 Oct;61(10):2527-2541.
    PMID: 37199891 DOI: 10.1007/s11517-023-02841-y
    Acute myocardial infarction (AMI) or heart attack is a significant global health threat and one of the leading causes of death. The evolution of machine learning has greatly revamped the risk stratification and death prediction of AMI. In this study, an integrated feature selection and machine learning approach was used to identify potential biomarkers for early detection and treatment of AMI. First, feature selection was conducted and evaluated before all classification tasks with machine learning. Full classification models (using all 62 features) and reduced classification models (using various feature selection methods ranging from 5 to 30 features) were built and evaluated using six machine learning classification algorithms. The results showed that the reduced models generally performed better (mean AUPRC via the random forest (RF) algorithm for the recursive feature elimination (RFE) method ranges from 0.8048 to 0.8260, while for the random forest importance (RFI) method it ranges from 0.8301 to 0.8505) than the full models (mean AUPRC via RF: 0.8044). The most notable finding of this study was the identification of a five-feature model that included cardiac troponin I, HDL cholesterol, HbA1c, anion gap, and albumin, which achieved results (mean AUPRC via RF: 0.8462) comparable to those of the models containing more features. These five features have been shown in previous studies to be significant risk factors for AMI or cardiovascular disease and could be used as potential biomarkers to predict the prognosis of AMI patients. From the medical point of view, using fewer features for diagnosis or prognosis could reduce cost and time for patients, as fewer clinical and pathological tests are needed.
    Matched MeSH terms: Algorithms*
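The AUPRC metric reported throughout the entry above is commonly estimated by average precision. A minimal sketch, with toy labels and classifier scores in place of the paper's AMI data:

```python
# Hedged sketch: average precision, a standard estimator of the area
# under the precision-recall curve (AUPRC). Labels and scores are toy
# values, not data from the study.

def average_precision(y_true, scores):
    """AP = sum over the ranking of (recall increment) * precision."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    pos = sum(y_true)
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if y_true[i]:
            tp += 1
        else:
            fp += 1
        recall, precision = tp / pos, tp / (tp + fp)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

# Toy example: 4 patients, scores from a hypothetical classifier.
ap = average_precision([1, 0, 1, 1], [0.9, 0.8, 0.7, 0.2])
print("average precision:", round(ap, 4))
```

Unlike ROC-AUC, this metric focuses on the positive (diseased) class, which is why it suits imbalanced clinical data.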
  6. Al-Hameli BA, Alsewari AA, Basurra SS, Bhogal J, Ali MAH
    J Integr Bioinform, 2023 Mar 01;20(1).
    PMID: 36810102 DOI: 10.1515/jib-2021-0037
    Diagnosing diabetes early is critical, as it helps patients live with the disease in a healthy way - through healthy eating, taking appropriate medical doses, and being more vigilant in their movements/activities to avoid wounds that are difficult to heal for diabetic patients. Data mining techniques are typically used to detect diabetes with high confidence to avoid misdiagnoses with other chronic diseases whose symptoms are similar to diabetes. Hidden Naïve Bayes (HNB) is a classification algorithm that works under a data-mining model which relaxes the conditional independence assumption of the traditional Naïve Bayes. The results from this research study, conducted on the Pima Indian Diabetes (PID) dataset, show that the prediction accuracy of the HNB classifier reached 82%. Moreover, the discretization method increases the performance and accuracy of the HNB classifier.
    Matched MeSH terms: Algorithms*
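The discretization step highlighted in the entry above matters because NB-style models estimate probabilities per discrete feature value. Below is a hedged sketch of equal-width discretization feeding a plain Naïve Bayes classifier; this is standard NB, not the HNB variant itself, and the toy data is invented.

```python
# Hedged sketch: equal-width discretization + plain Naïve Bayes with
# Laplace smoothing. Illustrative data, not the PID dataset.
from collections import defaultdict
import math

def discretize(X, bins=3):
    # Map each continuous feature to a bin index 0..bins-1.
    cols = list(zip(*X))
    edges = [(min(c), max(c)) for c in cols]
    out = []
    for row in X:
        enc = []
        for v, (lo, hi) in zip(row, edges):
            w = (hi - lo) / bins or 1.0
            enc.append(min(int((v - lo) / w), bins - 1))
        out.append(enc)
    return out, edges

def nb_train(X, y):
    priors, counts = defaultdict(int), defaultdict(int)
    for row, lab in zip(X, y):
        priors[lab] += 1
        for j, v in enumerate(row):
            counts[(lab, j, v)] += 1
    return priors, counts, len(y)

def nb_predict(model, row, bins=3):
    priors, counts, n = model
    best, best_lp = None, -math.inf
    for lab, c in priors.items():
        lp = math.log(c / n)
        for j, v in enumerate(row):
            # Laplace smoothing avoids zero probabilities.
            lp += math.log((counts[(lab, j, v)] + 1) / (c + bins))
        if lp > best_lp:
            best, best_lp = lab, lp
    return best

# Toy "glucose, BMI" rows; label 1 loosely mimics high-glucose cases.
X = [[85, 22], [90, 24], [100, 26], [150, 30], [160, 33], [170, 35]]
y = [0, 0, 0, 1, 1, 1]
Xd, _ = discretize(X)
model = nb_train(Xd, y)
pred = nb_predict(model, Xd[5])
print("prediction for last sample:", pred)
```

HNB additionally builds a "hidden parent" per attribute to capture dependencies, which this sketch omits.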
  7. Lee CS, Abd Shukor SR
    Environ Sci Pollut Res Int, 2023 Dec;30(60):124790-124805.
    PMID: 36961637 DOI: 10.1007/s11356-023-26358-x
    The controllable intensified process has received immense attention from researchers aiming to deliver the benefits of process intensification in a desired way: a more sustainable process with reduced environmental impact and improved intrinsic safety and process efficiency. Despite numerous studies of the gain and phase margin approach on conventional process systems, it has yet to be tested on intensified systems, as evidenced by the lack of available literature, to improve controller performance and robustness. Thus, this paper proposes the exact gain and phase margin (EGPM) analytical method to develop a suitable controller design for intensified systems using the Proportional-Integral-Derivative (PID) controller formulation, compared against the conventional Direct Synthesis (DS), Internal Model Control (IMC), and Industrial IMC methods in terms of performance and stability analysis. Simulation results showed that the EGPM method provides good setpoint tracking and disturbance rejection compared to DS, IMC, and Industrial IMC while retaining overall performance stability as time delay increases. The Bode stability criterion was used to determine the stability of the open-loop transfer function of each method, and the results demonstrated a decrease in stability as time delay increases for controllers designed using DS, IMC, and Industrial IMC, and hence degraded control performance. The proposed EGPM controller, however, maintains overall robustness and control performance as time delay increases and outperforms the other controller design methods at higher time delays under the [Formula: see text] uncertainty test. Additionally, the proposed EGPM controller design method provides overall superior control performance, with lower overshoot and shorter rise time than the other controllers, when the process time constant is smaller in magnitude ([Formula: see text]) than that of the instrumentation element - one of the major concerns during the design of intensified controllers, since it results in an overall system of higher order. Gain and phase margins of 2.5-4 and 60°-70°, respectively, are suggested for a wide range of control conditions for intensified processes, where robust control would be achievable even with high instrumentation dynamics. The proposed EGPM method controller is thus a more reliable design strategy for maintaining the overall robustness and performance of higher-order and complex systems that are strongly affected by time delay and the fast dynamic response of intensified processes.
    Matched MeSH terms: Algorithms*
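For reference, an exact gain- and phase-margin design like the EGPM method above enforces the standard margin conditions on the open-loop transfer function. A hedged sketch of those textbook relations (the paper's specific PID formulas are not reproduced here):

```latex
% Standard margin conditions on the open loop G_{ol}(s) = G_c(s)\,G_p(s):
% the phase crossover frequency \omega_p fixes the gain margin A_m,
% the gain crossover frequency \omega_g fixes the phase margin \phi_m.
\arg G_{ol}(j\omega_p) = -\pi,
\qquad A_m = \frac{1}{\lvert G_{ol}(j\omega_p)\rvert}
\lvert G_{ol}(j\omega_g)\rvert = 1,
\qquad \phi_m = \pi + \arg G_{ol}(j\omega_g)
```

Solving these four equations simultaneously for the controller parameters, rather than approximately, is what distinguishes an "exact" gain and phase margin design.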
  8. Zhang K, Ting HN, Choo YM
    Comput Methods Programs Biomed, 2024 Mar;245:108043.
    PMID: 38306944 DOI: 10.1016/j.cmpb.2024.108043
    BACKGROUND AND OBJECTIVE: Conflict may arise when more than one classifier is used to perform prediction or classification, since recognition model errors lead to conflicting evidence. Such conflicts can cause decision errors in baby cry recognition and further decrease its recognition accuracy. Thus, the objective of this study is to propose a method that can effectively minimize the conflict among deep learning models and improve the accuracy of baby cry recognition.

    METHODS: An improved Dempster-Shafer evidence theory (DST) based on Wasserstein distance and Deng entropy was proposed to reduce conflicts among the results by combining the credibility degree between pieces of evidence and the uncertainty degree of the evidence. To validate the effectiveness of the proposed method, examples were analyzed and the method was applied to baby cry recognition. Whale optimization algorithm-variational mode decomposition (WOA-VMD) was used to optimally decompose the baby cry signals. The deep features of the decomposed components were extracted using the VGG16 model. Long Short-Term Memory (LSTM) models were used to classify the baby cry signals. The improved DST decision method was then used to obtain the decision fusion.

    RESULTS: The proposed fusion method achieves an accuracy of 90.15% in classifying three types of baby cry. Improvement between 2.90% and 4.98% was obtained over the existing DST fusion methods. Recognition accuracy was improved by between 5.79% and 11.53% when compared to the latest methods used in baby cry recognition.

    CONCLUSION: The proposed method optimally decomposes baby cry signal, effectively reduces the conflict among the results of deep learning models and improves the accuracy of baby cry recognition.

    Matched MeSH terms: Algorithms*
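The fusion step in the entry above builds on classical Dempster-Shafer combination. Below is a hedged sketch of Dempster's rule for two basic probability assignments (BPAs); the paper's improvement (weighting evidence by Wasserstein distance and Deng entropy) is not reproduced, and the masses are toy values.

```python
# Hedged sketch: Dempster's rule of combination for two BPAs given as
# {frozenset(hypotheses): mass}. Conflicting mass (empty intersection)
# is removed by normalization.
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    k = 1.0 - conflict           # normalizing constant
    return {s: v / k for s, v in combined.items()}, conflict

# Two classifiers' evidence over cry types {hunger, pain}.
H, P = frozenset({"hunger"}), frozenset({"pain"})
HP = H | P  # ignorance: mass on the whole frame
m1 = {H: 0.7, P: 0.2, HP: 0.1}
m2 = {H: 0.6, P: 0.3, HP: 0.1}
fused, conflict = dempster_combine(m1, m2)
print("fused:", fused, "conflict:", round(conflict, 3))
```

When the conflict term approaches 1, this rule becomes unstable - the situation the improved DST methods discussed above are designed to handle.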
  9. AlThuwaynee OF, Kim SW, Najemaden MA, Aydda A, Balogun AL, Fayyadh MM, et al.
    Environ Sci Pollut Res Int, 2021 Aug;28(32):43544-43566.
    PMID: 33834339 DOI: 10.1007/s11356-021-13255-4
    This study investigates uncertainty in machine learning that can occur when there is significant variance in the prediction importance level of the independent variables, especially when the ROC fails to reflect the unbalanced effect of the prediction variables. A variable drop-off loop function, based on the concept of early termination for reduction of model capacity, regularization, and generalization control, was tested. A susceptibility index for airborne particulate matter of less than 10 μm diameter (PM10) was modeled using monthly maximum values, and spectral bands and indices from Landsat 8 imagery and OpenStreetMap data were used to prepare a range of independent variables. Probability and classification index maps were prepared using extreme-gradient boosting (XGBOOST) and random forest (RF) algorithms. These were assessed against utility criteria such as a confusion matrix of overall accuracy, quantity of variables, processing delay, degree of overfitting, importance distribution, and area under the receiver operating characteristic curve (ROC).
    Matched MeSH terms: Algorithms*
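The variable drop-off loop described above can be sketched as follows: repeatedly drop the least important predictor and keep the reduction only while the model score does not degrade beyond a tolerance (the early-termination idea). The absolute-correlation "importance" and toy score below are illustrative stand-ins for the paper's XGBOOST/RF importances.

```python
# Hedged sketch: a variable drop-off loop with early termination.
# Importance and score are toy proxies, not the study's actual models.

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def score(X, y, keep):
    # Toy model score: mean |corr| of kept variables with the target.
    return sum(abs(correlation([r[j] for r in X], y)) for j in keep) / len(keep)

def drop_off_loop(X, y, tol=0.02):
    keep = list(range(len(X[0])))
    base = score(X, y, keep)
    while len(keep) > 1:
        weakest = min(keep, key=lambda j: abs(correlation([r[j] for r in X], y)))
        trial = [j for j in keep if j != weakest]
        s = score(X, y, trial)
        if s + tol < base:   # early termination: score degraded
            break
        keep, base = trial, s
    return keep

# Toy data: variable 0 tracks y, variable 1 is noise, variable 2 anti-tracks y.
X = [[1, 5, 9], [2, 1, 8], [3, 8, 7], [4, 2, 6], [5, 9, 5]]
y = [1, 2, 3, 4, 5]
kept = drop_off_loop(X, y)
print("kept variables:", kept)
```

The loop discards the noise variable first, illustrating how capacity is reduced without sacrificing the score criterion.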
  10. Zafar F, Malik SA, Ali T, Daraz A, Afzal AR, Bhatti F, et al.
    PLoS One, 2024;19(2):e0298624.
    PMID: 38354203 DOI: 10.1371/journal.pone.0298624
    In this paper, we propose two different control strategies for the position control of the ball in the ball and beam system (BBS). The first control strategy uses the proportional integral derivative-second derivative with proportional integrator (PIDD2-PI) controller. The second uses the tilt integral derivative with filter (TID-F) controller. The designed controllers employ two distinct metaheuristic computation techniques for parameter tuning: grey wolf optimization (GWO) and the whale optimization algorithm (WOA). We evaluated the dynamic and steady-state performance of the proposed control strategies using four performance indices. In addition, to analyze their robustness, a comprehensive comparison was performed with a variety of controllers, including tilt integral-derivative (TID), fractional-order proportional integral derivative (FOPID), integral-proportional derivative (I-PD), proportional integral-derivative (PI-D), and proportional integral proportional derivative (PI-PD). By comparing different test cases, including variation in the parameters of the BBS with disturbance, we examine step response, setpoint tracking, disturbance rejection, and robustness of the proposed control strategies. The comprehensive comparison of results shows that WOA-PIDD2-PI-ISE and GWO-TID-F-ISE perform best. Moreover, the proposed control strategies yield oscillation-free, stable, and quick responses, which confirms their robustness to disturbance and parameter variation of the BBS, as well as their tracking performance. The proposed controllers can be practically implemented in underactuated mechanical systems (UMS), robotics, and industrial automation. The proposed control strategies were successfully tested in MATLAB simulation.
    Matched MeSH terms: Algorithms*
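The "-ISE" suffix in the controller names above refers to the integral performance index used as the tuning objective. A hedged sketch of the four standard indices, computed on a simulated step response; the first-order lag stands in for the actual ball-and-beam closed loop, which is not reproduced here.

```python
# Hedged sketch: ISE, IAE, ITAE, and ITSE performance indices on a
# simulated unit-step response. The plant is an illustrative
# first-order lag, not the BBS dynamics.
import math

def step_response(tau, dt, t_end):
    # Unit-step response of a first-order lag 1/(tau*s + 1).
    return [(i * dt, 1.0 - math.exp(-i * dt / tau))
            for i in range(int(t_end / dt) + 1)]

def indices(resp, setpoint=1.0):
    ise = iae = itae = itse = 0.0
    dt = resp[1][0] - resp[0][0]
    for t, yv in resp:
        e = setpoint - yv
        ise += e * e * dt          # integral of squared error
        iae += abs(e) * dt         # integral of absolute error
        itae += t * abs(e) * dt    # time-weighted absolute error
        itse += t * e * e * dt     # time-weighted squared error
    return {"ISE": ise, "IAE": iae, "ITAE": itae, "ITSE": itse}

resp = step_response(tau=0.5, dt=0.001, t_end=5.0)
ix = indices(resp)
print({k: round(v, 4) for k, v in ix.items()})
```

A metaheuristic such as GWO or WOA then searches the controller gains that minimize one of these scalars.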
  11. Ismail AM, Ab Hamid SH, Abdul Sani A, Mohd Daud NN
    PLoS One, 2024;19(4):e0299585.
    PMID: 38603718 DOI: 10.1371/journal.pone.0299585
    The performance of a defect prediction model on balanced versus imbalanced datasets has a large impact on the discovery of future defects. Current resampling techniques only address imbalanced datasets without taking into consideration the redundancy and noise inherent to them. To address the imbalance issue, we propose Kernel Crossover Oversampling (KCO), an oversampling technique based on kernel analysis and crossover interpolation. Specifically, the proposed technique aims to generate balanced datasets by increasing data diversity in order to reduce redundancy and noise. KCO first reduces multidimensional features to two-dimensional features by employing Kernel Principal Component Analysis (KPCA). KCO then divides the plotted data distribution by deploying spectral clustering to select the best region for interpolation. Lastly, KCO generates new defect data by interpolating different data templates within the selected data clusters. In the prediction evaluation conducted, KCO produced F-scores ranging from 21% to 63% across six datasets, on average. The experimental results show that KCO provides more effective prediction performance than other baseline techniques, consistently achieving higher F-scores in both within-project and cross-project predictions.
    Matched MeSH terms: Algorithms*
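The final interpolation step described above can be sketched simply: new minority samples are generated between random pairs of existing "templates" within a cluster. The KPCA and spectral-clustering stages of KCO are not reproduced here, and the cluster data is invented.

```python
# Hedged sketch: crossover interpolation between minority-class
# templates, the generation step of oversampling methods like KCO.
import random

def crossover_interpolate(cluster, n_new, seed=0):
    """Generate n_new synthetic rows between random template pairs."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(cluster, 2)      # pick two distinct templates
        gap = rng.random()                  # interpolation factor in (0, 1)
        synthetic.append([x + gap * (y - x) for x, y in zip(a, b)])
    return synthetic

# Toy minority cluster of defect-metric rows (e.g. LOC, complexity).
minority = [[120.0, 4.0], [150.0, 6.0], [130.0, 5.0]]
new_rows = crossover_interpolate(minority, n_new=4)
for row in new_rows:
    print([round(v, 2) for v in row])
```

Because each synthetic row is a convex combination of two templates, it always stays inside the cluster's bounding box, which limits the noise injected by oversampling.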
  12. Ibrahim S, Abdul Wahab N
    Water Sci Technol, 2024 Apr;89(7):1701-1724.
    PMID: 38619898 DOI: 10.2166/wst.2024.099
    Hyperparameter tuning is an important process for maximizing the performance of any neural network model. The present study proposes a factorial design of experiments for screening, together with response surface methodology, to optimize the hyperparameters of two artificial neural network algorithms. A feed-forward neural network (FFNN) and a radial basis function neural network (RBFNN) are applied to predict the permeate flux of palm oil mill effluent. The permeate pump and transmembrane pressure of the submerged membrane bioreactor system are the input variables. Six hyperparameters of the FFNN model, including four numerical factors (number of neurons, learning rate, momentum, and number of epochs) and two categorical factors (training function and activation function), are used in the hyperparameter optimization. The RBFNN involves two numerical factors: the number of neurons and the spread. The conventional method (one-variable-at-a-time) is compared in terms of optimization processing time and model accuracy. The results indicate that the optimal hyperparameters obtained by the proposed approach produce good accuracy with a smaller generalization error. The simulation results show an improvement of more than 65% in training performance, with fewer repetitions and less processing time. This methodology can be utilized for any type of neural network application to find the optimum levels of different parameters.
    Matched MeSH terms: Algorithms*
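The factorial screening step described above can be sketched as a two-level full factorial design over a few hyperparameters, with main effects computed from the runs. The "response" function below is a hypothetical smooth stand-in for a trained network's validation error; factor names and levels are illustrative assumptions.

```python
# Hedged sketch: a 2-level full factorial screen over three
# hyperparameters, with main effects (mean response at the high level
# minus mean response at the low level).
from itertools import product

def response(neurons, lr, momentum):
    # Hypothetical stand-in for validation RMSE (lower is better).
    return ((neurons - 25) ** 2 / 400
            + (lr - 0.1) ** 2 * 50
            + (momentum - 0.8) ** 2)

levels = {
    "neurons":  [10, 30],
    "lr":       [0.05, 0.2],
    "momentum": [0.6, 0.9],
}
runs = [(combo, response(*combo)) for combo in product(*levels.values())]

effects = {}
for i, name in enumerate(levels):
    hi = [r for c, r in runs if c[i] == levels[name][1]]
    lo = [r for c, r in runs if c[i] == levels[name][0]]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

best = min(runs, key=lambda cr: cr[1])
print("main effects:", {k: round(v, 4) for k, v in effects.items()})
print("best run:", best[0])
```

Factors with large main effects are then carried into the response-surface stage; negligible ones (here, only if an effect is near zero) are screened out, which is exactly the point of the screening design.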
  13. Hussein AA, Rahman TA, Leow CY
    Sensors (Basel), 2015;15(12):30545-70.
    PMID: 26690159 DOI: 10.3390/s151229817
    Localization is an important aspect of wireless sensor networks and the focus of much interesting research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, like those using the received signal strength indication (RSSI), are heavily affected by this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step function multi-frequency multi-power localization (SF-MFMPL), including the fingerprint matching technique and lateration, to provide a robust and accurate localization technique. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map of the network, which indicates the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhancement and robustness of RSSI localization performance under log-normal shadow fading effects and in the presence of physical barrier attacks, through detecting, filtering, and eliminating the effects of these attacks.
    Matched MeSH terms: Algorithms
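The range-based RSSI localization that the entry above builds on has two steps: converting RSSI readings to distances via a path-loss model, then laterating. A hedged sketch with an ideal (noise-free) log-distance model; the path-loss parameters and anchor layout are illustrative, and the paper's multi-frequency/multi-power machinery is not reproduced.

```python
# Hedged sketch: RSSI -> distance via the log-distance path-loss model,
# then linearized least-squares trilateration in 2D.
import math

def rssi_to_distance(rssi, rssi0=-40.0, n=2.0):
    # Log-distance model: rssi = rssi0 - 10*n*log10(d), with d0 = 1 m.
    return 10 ** ((rssi0 - rssi) / (10 * n))

def trilaterate(anchors, dists):
    # Subtract the first circle equation to get a linear system A p = b.
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Least squares via the normal equations (2x2 here).
    ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
    atb = [sum(r[i] * v for r, v in zip(A, b)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    y = (atb[1] * ata[0][0] - atb[0] * ata[1][0]) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = (3.0, 4.0)
# Simulate ideal RSSI readings at each anchor, then invert them.
readings = [-40.0 - 20 * math.log10(math.dist(a, target)) for a in anchors]
dists = [rssi_to_distance(r) for r in readings]
est = trilaterate(anchors, dists)
print("estimated position:", est)
```

A physical barrier attack corrupts the RSSI-to-distance step, which is why the paper detects and filters attack-prone regions before lateration.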
  14. Dabbagh M, Lee SP
    ScientificWorldJournal, 2014;2014:737626.
    PMID: 24982987 DOI: 10.1155/2014/737626
    Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of the requirements that need to be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach considers both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach that integrates the prioritization of functional and nonfunctional requirements. Applying the proposed approach produces two separate prioritized lists of functional and nonfunctional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment comparing it with two state-of-the-art approaches: the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time consumption while preserving the quality of its results at a high level of agreement with the results produced by the other two approaches.
    Matched MeSH terms: Algorithms
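The AHP baseline mentioned above derives a priority vector from a pairwise comparison matrix. A hedged sketch using the common geometric-mean approximation; the judgment matrix is invented for illustration.

```python
# Hedged sketch: AHP priorities via the row geometric-mean
# approximation of the principal eigenvector. Matrix is illustrative.
import math

def ahp_priorities(M):
    """Approximate AHP weights: normalized row geometric means."""
    gms = [math.prod(row) ** (1 / len(row)) for row in M]
    total = sum(gms)
    return [g / total for g in gms]

# Pairwise judgments over three requirements (Saaty's 1-9 scale):
# R1 is 3x as important as R2 and 5x as important as R3.
M = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
w = ahp_priorities(M)
print("priorities:", [round(x, 3) for x in w])
```

The quadratic growth in pairwise judgments (n(n-1)/2 comparisons) is the main source of the time consumption that the paper's approach reduces.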
  15. Nikuie M, Ahmad MZ
    ScientificWorldJournal, 2014;2014:517218.
    PMID: 24737977 DOI: 10.1155/2014/517218
    In this paper, the singular LR fuzzy linear system is introduced. Such systems are divided into two parts: singular consistent LR fuzzy linear systems and singular inconsistent LR fuzzy linear systems. The capability of the generalized inverses such as Drazin inverse, pseudoinverse, and {1}-inverse in finding minimal solution of singular consistent LR fuzzy linear systems is investigated.
    Matched MeSH terms: Algorithms
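For reference, the "minimal solution" sought in the entry above is conventionally characterized through generalized inverses. A hedged sketch of the standard (crisp) relations the fuzzy analysis builds on:

```latex
% For a consistent system A x = b with singular A, the minimal
% (least-norm) solution is given by the Moore-Penrose pseudoinverse:
x^{*} = A^{+} b,
\qquad \lVert x^{*} \rVert_2 = \min \{\, \lVert x \rVert_2 : A x = b \,\}
% For square A of index k with b \in \mathcal{R}(A^k), the Drazin
% inverse yields the corresponding solution:
x = A^{D} b
```

In the fuzzy setting these inverses are applied to the LR-decomposed coefficient data, which is what the paper investigates.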
  16. Saeed F, Salim N, Abdo A
    Int J Comput Biol Drug Des, 2014 01 09;7(1):31-44.
    PMID: 24429501 DOI: 10.1504/IJCBDD.2014.058584
    Many types of clustering techniques for chemical structures have been used in the literature, but it is known that no single method will always give the best results for all types of applications. Recent work on consensus clustering methods is motivated by the success of combining multiple classifiers in many areas and by the ability of consensus clustering to improve the robustness, novelty, consistency, and stability of individual clusterings. In this paper, the Cluster-based Similarity Partitioning Algorithm (CSPA) was examined for improving the quality of chemical structure clustering. The effectiveness of clustering was evaluated based on the ability to separate active from inactive molecules in each cluster, and the results were compared with Ward's clustering method. The MDL Drug Data Report (MDDR) chemical dataset was used for the experiments. The results, obtained by combining multiple clusterings, showed that the consensus clustering method can improve the robustness, novelty, and stability of chemical structure clustering.
    Matched MeSH terms: Algorithms
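The core of cluster-based similarity partitioning such as CSPA above is a co-association matrix: how often two items end up in the same cluster across the base clusterings, used as a pairwise similarity for re-clustering. A hedged sketch with toy labels standing in for clusterings of MDDR molecules:

```python
# Hedged sketch: the co-association (pairwise similarity) matrix at the
# heart of CSPA-style consensus clustering. Labels are toy data.

def co_association(clusterings, n):
    """clusterings: list of label lists; returns an n x n similarity."""
    m = [[0.0] * n for _ in range(n)]
    for labels in clusterings:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    m[i][j] += 1.0 / len(clusterings)
    return m

# Three base clusterings of five molecules (e.g. from different
# fingerprints); molecules 0-1 co-occur in every clustering.
clusterings = [
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 1, 2, 2],
]
sim = co_association(clusterings, n=5)
print("P(0,1 together) =", sim[0][1], " P(0,3 together) =", sim[0][3])
```

CSPA then partitions the graph induced by this similarity matrix (e.g. with METIS) to obtain the consensus clustering; that final step is omitted here.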
  17. Ahmad MZ, Hasan MK, Abbasbandy S
    ScientificWorldJournal, 2013;2013:454969.
    PMID: 24082853 DOI: 10.1155/2013/454969
    We study a fuzzy fractional differential equation (FFDE) and present its solution using Zadeh's extension principle. The proposed study extends the case of fuzzy differential equations of integer order. We also propose a numerical method to approximate the solution of FFDEs. To solve nonlinear problems, the proposed numerical method is then incorporated into an unconstrained optimisation technique. Several numerical examples are provided.
    Matched MeSH terms: Algorithms
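For reference, Zadeh's extension principle invoked in the entry above lifts a crisp map to fuzzy arguments. A hedged sketch of the standard statement:

```latex
% Zadeh's extension principle: for a crisp map f and fuzzy number u,
% the membership of the image f(u) at each output y is the supremum of
% the input memberships mapped to y:
\mu_{f(u)}(y) = \sup \{\, \mu_{u}(x) : f(x) = y \,\}
% Equivalently, on alpha-cuts (for continuous f on compact cuts):
[f(u)]_{\alpha} = f([u]_{\alpha}), \qquad \alpha \in (0, 1]
```

The alpha-cut form is what makes the principle computationally tractable: each cut reduces to an interval optimization problem, which is where the paper's unconstrained optimization technique enters.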
  18. Chong KK, Wong CW, Siaw FL, Yew TK, Ng SS, Liang MS, et al.
    Sensors (Basel), 2009;9(10):7849-65.
    PMID: 22408483 DOI: 10.3390/s91007849
    A novel on-axis general sun-tracking formula has been integrated into the algorithm of an open-loop sun-tracking system in order to track the sun accurately and cost-effectively. Sun-tracking errors due to installation defects of the 25 m² prototype solar concentrator have been analyzed from solar images recorded with a CCD camera. With the recorded data, misaligned angles relative to the ideal azimuth-elevation axes have been determined and corrected by a straightforward change of the parameter values in the general formula of the tracking algorithm, improving the tracking accuracy to 2.99 mrad, which falls below the encoder resolution limit of 4.13 mrad.
    Matched MeSH terms: Algorithms
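For reference, azimuth-elevation tracking formulas like the general one above build on the standard sun-position relations. A hedged sketch of those textbook equations (the paper's general formula, which additionally absorbs misalignment angles, is not reproduced):

```latex
% Standard sun-position relations: with solar declination \delta,
% site latitude \Phi, and hour angle \omega, the solar elevation
% \alpha and azimuth A (measured from north) satisfy
\sin\alpha = \sin\delta \sin\Phi + \cos\delta \cos\Phi \cos\omega
\cos A = \frac{\sin\delta \cos\Phi - \cos\delta \sin\Phi \cos\omega}{\cos\alpha}
```

An open-loop tracker drives its two axes from these computed angles alone, which is why correcting the misaligned-axis parameters in the formula directly improves pointing accuracy.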
  19. Sim KS, Wee MY, Lim WK
    Microsc Res Tech, 2008 Oct;71(10):710-20.
    PMID: 18615490 DOI: 10.1002/jemt.20610
    We propose to cascade the Shape-Preserving Piecewise Cubic Hermite model with the Autoregressive Moving Average (ARMA) interpolator; we call this technique the Shape-Preserving Piecewise Cubic Hermite Autoregressive Moving Average (SP2CHARMA) model. In a few test cases involving different images, this model is found to deliver an optimum solution for signal-to-noise ratio (SNR) estimation problems under different noise environments. The performance of the proposed estimator is compared with two existing methods: the autoregressive-based and autoregressive moving average estimators. Being more robust to noise, the SP2CHARMA estimator is significantly more efficient than the two existing methods.
    Matched MeSH terms: Algorithms
  20. Golkar E, Prabuwono AS, Patel A
    Sensors (Basel), 2012;12(11):14774-91.
    PMID: 23202186 DOI: 10.3390/s121114774
    This paper presents a novel, real-time defect detection system, based on best-fit polynomial interpolation, that inspects the conditions of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been performed, tested, and validated on numerous pipes and ceramic tiles. The results illustrate that physical defects such as abnormal, popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously.
    Matched MeSH terms: Algorithms
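The best-fit polynomial idea in the entry above can be sketched in one dimension: fit a low-order polynomial to a surface profile and flag points whose residual exceeds a threshold as defects (e.g. a popped-up blob). The profile data, degree, and threshold are illustrative assumptions.

```python
# Hedged sketch: defect detection by residuals from a least-squares
# polynomial fit (normal equations + Gaussian elimination).

def polyfit(xs, ys, deg):
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):              # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return coef  # coef[i] multiplies x**i

def flag_defects(xs, ys, deg=2, thresh=0.5):
    coef = polyfit(xs, ys, deg)
    fit = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    return [i for i, (y, f) in enumerate(zip(ys, fit)) if abs(y - f) > thresh]

# Height profile along a pipe: smooth curve with a blob at index 5.
xs = [float(i) for i in range(11)]
ys = [0.01 * x * x for x in xs]
ys[5] += 2.0  # simulated popped-up blob
defects = flag_defects(xs, ys)
print("defective points:", defects)
```

Because the smooth baseline is absorbed by the polynomial fit, only the localized blob survives in the residual, which is what makes the approach fast enough for real-time inspection.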