Displaying publications 81 - 100 of 1459 in total

  1. Yildirim O, Baloglu UB, Tan RS, Ciaccio EJ, Acharya UR
    Comput Methods Programs Biomed, 2019 Jul;176:121-133.
    PMID: 31200900 DOI: 10.1016/j.cmpb.2019.05.004
    BACKGROUND AND OBJECTIVE: For diagnosis of arrhythmic heart problems, electrocardiogram (ECG) signals should be recorded and monitored. The long-term signal records obtained are analyzed by expert cardiologists. Devices such as the Holter monitor have limited hardware capabilities. For improved diagnostic capacity, it would be helpful to detect arrhythmic signals automatically. In this study, a novel approach is presented as a candidate solution for these issues.

    METHODS: A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long short-term memory (LSTM) classifiers are employed to automatically recognize arrhythmias using ECG features, which are deeply coded with the CAE network.

    RESULTS: Based upon the coded ECG signals, both storage requirement and classification time were considerably reduced. In experimental studies conducted with the MIT-BIH arrhythmia database, ECG signals were compressed with an average percentage root mean square difference (PRD) of 0.70%, and an accuracy of over 99.0% was observed.

    CONCLUSIONS: One of the significant contributions of this study is that the proposed approach can significantly reduce the analysis time when using LSTM networks. Thus, a novel and effective approach was proposed for both ECG signal compression and high-performance automatic recognition, with very low computational cost.

    Matched MeSH terms: Algorithms
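    The percentage root mean square difference (PRD) reported above is a standard compression-fidelity metric (lower means a more faithful reconstruction). A minimal sketch of how it could be computed, assuming plain NumPy arrays for the original and reconstructed beats (illustrative only, not the authors' code):

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root mean square difference between an original
    signal and its reconstruction (lower means better fidelity)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

# Toy example: a synthetic beat and a slightly perturbed reconstruction.
beat = np.sin(np.linspace(0, 2 * np.pi, 360))
approx = beat + 0.001 * np.random.randn(beat.size)
print(f"PRD = {prd(beat, approx):.3f}%")
```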
  2. Alizadehsani R, Abdar M, Roshanzamir M, Khosravi A, Kebria PM, Khozeimeh F, et al.
    Comput Biol Med, 2019 Aug;111:103346.
    PMID: 31288140 DOI: 10.1016/j.compbiomed.2019.103346
    Coronary artery disease (CAD) is the most common cardiovascular disease (CVD) and often leads to a heart attack. It annually causes millions of deaths and billions of dollars in financial losses worldwide. Angiography, which is invasive and risky, is the standard procedure for diagnosing CAD. Alternatively, machine learning (ML) techniques have been widely used in the literature as fast, affordable, and noninvasive approaches for CAD detection. The results that have been published on ML-based CAD diagnosis differ substantially in terms of the analyzed datasets, sample sizes, features, location of data collection, performance metrics, and applied ML techniques. Due to these fundamental differences, achievements in the literature cannot be generalized. This paper conducts a comprehensive and multifaceted review of all relevant studies that were published between 1992 and 2019 for ML-based CAD diagnosis. The impacts of various factors, such as dataset characteristics (geographical location, sample size, features, and the stenosis of each coronary artery) and applied ML techniques (feature selection, performance metrics, and method) are investigated in detail. Finally, the important challenges and shortcomings of ML-based CAD diagnosis are discussed.
    Matched MeSH terms: Algorithms
  3. Koh JEW, Ng EYK, Bhandary SV, Hagiwara Y, Laude A, Acharya UR
    Comput Biol Med, 2018 Jan 01;92:204-209.
    PMID: 29227822 DOI: 10.1016/j.compbiomed.2017.11.019
    Untreated age-related macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma may lead to irreversible vision loss. Hence, it is essential to have regular eye screening to detect these eye diseases at an early stage and to offer treatment where appropriate. One of the simplest, non-invasive and cost-effective techniques to screen the eyes is fundus photo imaging. However, the manual evaluation of fundus images is tedious and challenging. Further, the diagnosis made by ophthalmologists may be subjective. Therefore, an objective and novel algorithm using the pyramid histogram of visual words (PHOW) and Fisher vectors is proposed for the classification of fundus images into their respective eye conditions (normal, AMD, DR, and glaucoma). The proposed algorithm extracts features which are represented as words. These features are built and encoded into a Fisher vector for classification using a random forest classifier. This proposed algorithm is validated with both blindfold and ten-fold cross-validation techniques. An accuracy of 90.06% is achieved with the blindfold method, and the highest accuracy of 96.79% is obtained with ten-fold cross-validation. The high classification performance of our system shows the potential of deploying it in polyclinics to assist healthcare professionals in their initial diagnosis of the eye. Our developed system can reduce the workload of ophthalmologists significantly.
    Matched MeSH terms: Algorithms
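    The ten-fold cross-validation protocol mentioned above can be reproduced generically once feature vectors are available. A hedged scikit-learn sketch with synthetic placeholder features standing in for the PHOW/Fisher-vector encodings (not the authors' pipeline or data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Placeholder: rows would be Fisher-vector encodings of fundus images,
# labels the four conditions (normal, AMD, DR, glaucoma).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 256))      # hypothetical feature matrix
y = rng.integers(0, 4, size=400)     # hypothetical class labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"ten-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```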
  4. Barua PD, Baygin N, Dogan S, Baygin M, Arunkumar N, Fujita H, et al.
    Sci Rep, 2022 Oct 14;12(1):17297.
    PMID: 36241674 DOI: 10.1038/s41598-022-21380-4
    Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch and transfer learning-based model to classify various pain intensities using facial images. The input facial images were segmented into dynamic-sized horizontal patches or "shutter blinds". A lightweight deep network DarkNet19 pre-trained on ImageNet1K was used to generate deep features from the shutter blinds and the undivided resized segmented input facial image. The most discriminative features were selected from these deep features using iterative neighborhood component analysis, which were then fed to a standard shallow fine k-nearest neighbor classifier for classification using tenfold cross-validation. The proposed shutter blinds-based model was trained and tested on datasets derived from two public databases-University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database and Denver Intensity of Spontaneous Facial Action Database-which both comprised four pain intensity classes that had been labeled by human experts using validated facial action coding system methodology. Our shutter blinds-based classification model attained more than 95% overall accuracy rates on both datasets. The excellent performance suggests that the automated pain intensity classification model can be deployed to assist doctors in the non-verbal detection of pain using facial images in various situations (e.g., non-communicative patients or during surgery). This system can facilitate timely detection and management of pain.
    Matched MeSH terms: Algorithms*
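    The "shutter blinds" step described above amounts to slicing the face image into horizontal bands before feature extraction. A small NumPy sketch under assumed sizes (the number and heights of the patches are illustrative, not the paper's exact dynamic-sizing scheme):

```python
import numpy as np

def shutter_blinds(image, n_blinds):
    """Split an image into n_blinds horizontal patches ("shutter blinds").
    Patch heights vary slightly when the height does not divide evenly."""
    h = image.shape[0]
    edges = np.linspace(0, h, n_blinds + 1, dtype=int)
    return [image[edges[i]:edges[i + 1]] for i in range(n_blinds)]

# Toy example: a 224x224 grayscale "face" divided into 4 blinds, each of
# which would then be resized and fed to the pretrained CNN for features.
face = np.zeros((224, 224))
patches = shutter_blinds(face, 4)
print([p.shape for p in patches])   # [(56, 224), (56, 224), (56, 224), (56, 224)]
```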
  5. Erten M, Tuncer I, Barua PD, Yildirim K, Dogan S, Tuncer T, et al.
    J Digit Imaging, 2023 Aug;36(4):1675-1686.
    PMID: 37131063 DOI: 10.1007/s10278-023-00827-8
    Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer to generate mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K to extract 1,920 features from each raw input image, and its six corresponding mixed images were concatenated to form a final feature vector of length 13,440; (3) iterative neighborhood component analysis to select the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) shallow kNN-based classification with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
    Matched MeSH terms: Algorithms*
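    The Arnold Cat Map (ACM) named above is a standard area-preserving pixel scramble for square images. A minimal NumPy sketch of one common form of the map, applied here to a whole small array rather than the paper's 16 x 16 patch-based mixer (an illustrative simplification, not the authors' code):

```python
import numpy as np

def arnold_cat_map(image, iterations=1):
    """Scramble a square image with the Arnold Cat Map
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied repeatedly."""
    n = image.shape[0]
    assert image.shape[0] == image.shape[1], "ACM needs a square image"
    out = image.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# Toy 4x4 example: the map is a bijection, so every pixel lands somewhere new.
mixed = arnold_cat_map(np.arange(16).reshape(4, 4), iterations=2)
print(mixed)
```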
  6. Guure CB, Ibrahim NA, Adam MB
    Comput Math Methods Med, 2013;2013:849520.
    PMID: 23476718 DOI: 10.1155/2013/849520
    Interval-censored data consist of adjacent inspection times that surround an unknown failure time. In this paper, we have reviewed the classical approach, maximum likelihood, for estimating the Weibull parameters with interval-censored data. We have also considered the Bayesian approach for estimating the Weibull parameters with interval-censored data under three loss functions. This study became necessary because of the limited discussion in the literature, if any, with regard to estimating the Weibull parameters with interval-censored data using the Bayesian approach. A simulation study is carried out to compare the performances of the methods. A real data application is also illustrated. It has been observed from the study that the Bayesian estimator is preferred to the classical maximum likelihood estimator for both the scale and shape parameters.
    Matched MeSH terms: Algorithms
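    For the classical (maximum likelihood) side of the comparison above, the interval-censored Weibull log-likelihood sums log[F(r_i) - F(l_i)] over the inspection intervals, where F is the Weibull CDF. A hedged SciPy sketch with made-up interval data (not the paper's data or its Bayesian estimators):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical interval-censored data: each failure lies in (left, right].
left  = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
right = np.array([2.0, 4.0, 1.5, 5.0, 2.5])

def weibull_cdf(t, shape, scale):
    return 1.0 - np.exp(-(t / scale) ** shape)

def neg_log_lik(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    p = weibull_cdf(right, shape, scale) - weibull_cdf(left, shape, scale)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print("MLE shape, scale:", res.x)
```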
  7. Acharya UR, Bhat S, Koh JEW, Bhandary SV, Adeli H
    Comput Biol Med, 2017 Sep 01;88:72-83.
    PMID: 28700902 DOI: 10.1016/j.compbiomed.2017.06.022
    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The basic microstructures in typical images are called textons. The convolution process produces textons. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system.
    Matched MeSH terms: Algorithms*
  8. Acharya UR, Hagiwara Y, Adeli H
    Epilepsy Behav, 2018 Nov;88:251-261.
    PMID: 30317059 DOI: 10.1016/j.yebeh.2018.09.030
    In the past two decades, significant advances have been made on automated electroencephalogram (EEG)-based diagnosis of epilepsy and seizure detection. A number of innovative algorithms have been introduced that can aid in epilepsy diagnosis with a high degree of accuracy. In recent years, the frontiers of computational epilepsy research have moved to seizure prediction, a more challenging problem. While antiepileptic medication can result in complete seizure freedom in many patients with epilepsy, up to one-third of patients living with epilepsy will have medically intractable epilepsy, where medications reduce seizure frequency but do not completely control seizures. If a seizure can be predicted prior to its clinical manifestation, then there is potential for abortive treatment to be given, either self-administered or via an implanted device administering medication or electrical stimulation. This will have a far-reaching impact on the treatment of epilepsy and patients' quality of life. This paper presents a state-of-the-art review of recent efforts and journal articles on seizure prediction. The technologies developed for epilepsy diagnosis and seizure detection are being adapted and extended for seizure prediction. The paper ends with some novel ideas for seizure prediction using the increasingly ubiquitous machine learning technology, particularly deep neural network machine learning.
    Matched MeSH terms: Algorithms
  9. Ahmad Fadzil M, Ngah NF, George TM, Izhar LI, Nugroho H, Adi Nugroho H
    PMID: 21097305 DOI: 10.1109/IEMBS.2010.5628041
    Diabetic retinopathy (DR) is a sight-threatening complication of diabetes mellitus that affects the retina. At present, the classification of DR is based on the International Clinical Diabetic Retinopathy Disease Severity scale. In this paper, foveal avascular zone (FAZ) enlargement with DR progression is investigated to enable a new and effective grading protocol for DR severity in an observational clinical study. The performance of a computerised DR monitoring and grading system that digitally analyses colour fundus images to measure the enlargement of the FAZ and grade DR is evaluated. The range of FAZ area is optimised to accurately determine DR severity and progression stages using a Gaussian Bayes classifier. The system achieves high accuracies of above 96%, sensitivities higher than 88% and specificities higher than 96% in grading DR severity. In particular, high sensitivity (100%), specificity (>98%) and accuracy (99%) values are obtained for the No DR (normal) and Severe NPDR/PDR stages. The system performance indicates that it is suitable for early detection of DR and for effective treatment of severe cases.
    Matched MeSH terms: Algorithms
  10. Mahdi MA, Sheih SJ, Adikan FR
    Opt Express, 2009 Jun 08;17(12):10069-75.
    PMID: 19506658
    We demonstrate a simplified algorithm that accounts for the contribution of amplified spontaneous emission (ASE) in a variable gain-flattened Erbium-doped fiber amplifier (EDFA). The detected signal power at the input and output ports of the EDFA comprises both signal and noise. The ASE generated by the EDFA cannot be differentiated by the photodetector, which leads to underestimation of the targeted gain value. This gain penalty must be taken into consideration in order to obtain the accurate gain level. By taking the average gain penalty within the dynamic gain range, the targeted output power is set higher than the desired level. Thus, the errors are significantly reduced to less than 0.15 dB for desired gain values from 15 dB to 30 dB.
    Matched MeSH terms: Algorithms*
  11. Ali SS, Moinuddin M, Raza K, Adil SH
    ScientificWorldJournal, 2014;2014:850189.
    PMID: 24987745 DOI: 10.1155/2014/850189
    Radial basis function neural networks (RBFNNs) are used in a variety of applications such as pattern recognition, nonlinear identification, control and time series prediction. In this paper, the learning algorithm of radial basis function neural networks is analyzed in a feedback structure. The robustness of the learning algorithm is discussed in the presence of uncertainties that might be due to noisy perturbations at the input or to modeling mismatch. An intelligent adaptation rule is developed for the learning rate of the RBFNN, which gives faster convergence via an estimate of error energy while guaranteeing l2 stability governed by upper bounding via the small gain theorem. Simulation results are presented to support our theoretical development.
    Matched MeSH terms: Algorithms
  12. Bukhari MM, Ghazal TM, Abbas S, Khan MA, Farooq U, Wahbah H, et al.
    Comput Intell Neurosci, 2022;2022:3606068.
    PMID: 35126487 DOI: 10.1155/2022/3606068
    Smart applications and intelligent systems are being developed that are self-reliant, adaptive, and knowledge-based in nature. Emergency and disaster management, aerospace, healthcare, IoT, and mobile applications, among others, are revolutionizing the world of computing. The large and growing number of devices has rendered the current centralized cloud design impractical. Despite the use of 5G technology, delay-sensitive applications and the cloud cannot work in parallel because threshold values of certain parameters, such as latency, bandwidth, and response time, are exceeded. Middleware proves to be a better solution to cope with these issues while satisfying the demanding requirements of task offloading. Fog computing is recommended as middleware in this research article because it provides services at the edge of the network, so delay-sensitive applications can be served effectively. On the other hand, fog nodes contain a limited set of resources and may not be able to process all tasks, especially those of computation-intensive applications. Additionally, fog is not a replacement for the cloud but a supplement to it: both act as counterparts and offer their services according to task needs, with fog computing having closer proximity to the devices than the cloud. The problem arises when a decision must be made about what to offload (data, computation, or application), where to offload it (fog or cloud), and how much to offload. Fog-cloud collaboration is stochastic in terms of task-related attributes such as task size, duration, arrival rate, and required resources. Dynamic task offloading therefore becomes crucial in order to utilize the resources at the fog and cloud and improve QoS. Since forming such a task offloading policy is complex, this problem is addressed in the research article, which proposes an intelligent task offloading model. Simulation results demonstrate the effectiveness of the proposed logistic regression model, which achieves 86% accuracy compared to other algorithms, and confidence in the predictive task offloading policy by ensuring process consistency and reliability.
    Matched MeSH terms: Algorithms*
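    The offloading decision described above ultimately reduces to a binary fog-versus-cloud prediction from task attributes. A hedged scikit-learn sketch with synthetic task features and a toy labeling rule (the feature definitions and data are assumptions, not the authors' dataset or model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical task attributes: [size_MB, duration_ms, arrival_rate, cpu_demand]
rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 4))
# Toy labeling rule: heavy, compute-hungry tasks go to the cloud (1), light ones to fog (0).
y = ((X[:, 0] + X[:, 3]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
print("offload-to-cloud accuracy:", model.score(X_te, y_te))
```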
  13. Hussein AF, Hashim SJ, Aziz AFA, Rokhani FZ, Adnan WAW
    J Med Syst, 2017 Nov 29;42(1):15.
    PMID: 29188389 DOI: 10.1007/s10916-017-0871-8
    The non-stationary and multi-frequency nature of biomedical signal activity makes the use of time-frequency distributions (TFDs) for analysis inevitable. Time-frequency analysis provides simultaneous interpretations in both the time and frequency domains, enabling comprehensive explanation, presentation and interpretation of electrocardiogram (ECG) signals. The diversity of TFDs and the specific properties of each type show the need to determine the best TFD for ECG analysis. In this study, a performance evaluation of five TFDs in terms of ECG abnormality detection is presented. The detection criteria are based on features extracted from the most important ECG signal component (the QRS complex) to detect normal and abnormal cases. This is achieved by estimating its energy concentration magnitude using the TFDs. The TFDs analyse ECG signals in one-minute intervals instead of the conventional time-domain approach that analyses per beat or per frame containing several beats. A total of 18 long-term ECG records from the MIT-BIH Normal Sinus Rhythm Database, sampled at 128 Hz, have been analysed. The tested TFDs include the Dual-Tree Wavelet Transform, Spectrogram, Pseudo Wigner-Ville, Choi-Williams, and Born-Jordan distributions. Each record is divided into one-minute slots, which has not been considered previously, and analysed. The sample periods (ten one-minute slots) are randomly selected for each record. The resulting 99.44% detection accuracy for 15,735 ECG beats shows that the Choi-Williams distribution is the most reliable for heart problem detection, especially in automated systems that provide continuous monitoring over long durations.
    Matched MeSH terms: Algorithms
  14. Bichi AA, Samsudin R, Hassan R, Hasan LRA, Ado Rogo A
    PLoS One, 2023;18(5):e0285376.
    PMID: 37159449 DOI: 10.1371/journal.pone.0285376
    Automatic text summarization is one of the most promising solutions to the ever-growing challenges of textual data, as it produces a shorter version of the original document with fewer bytes but the same information. Despite the advancements in automatic text summarization research, research on automatic text summarization methods for documents written in Hausa, a Chadic language widely spoken in West Africa by approximately 150,000,000 people as either their first or second language, is still in its early stages. This study proposes a novel graph-based extractive single-document summarization method for Hausa text by modifying the existing PageRank algorithm, using the normalized common bigram count between adjacent sentences as the initial vertex score. The proposed method is evaluated on a primarily collected Hausa summarization evaluation dataset comprising 113 Hausa news articles, using the ROUGE evaluation toolkit. The proposed approach outperformed the standard methods on the same dataset: it outperformed the TextRank method by 2.1%, LexRank by 12.3%, the centroid-based method by 19.5%, and the BM25 method by 17.4%.
    Matched MeSH terms: Algorithms*
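    A hedged sketch of the graph-based extractive idea described above: sentences become vertices, edges carry a similarity weight, and a normalized common-bigram count seeds the initial vertex score (here passed as networkx's nstart). The tokenization, similarity measure, and English toy sentences are simplifications, not the authors' exact formulation:

```python
import networkx as nx

def bigrams(sentence):
    words = sentence.lower().split()
    return set(zip(words, words[1:]))

def overlap(a, b):
    """Normalized common-bigram count between two sentences."""
    ba, bb = bigrams(a), bigrams(b)
    return len(ba & bb) / max(1, min(len(ba), len(bb)))

sentences = [
    "the council approved the new water project",
    "the water project will supply three districts",
    "officials said the project starts next year",
]

# Build the sentence graph with bigram-overlap edge weights.
g = nx.Graph()
g.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        w = overlap(sentences[i], sentences[j])
        if w > 0:
            g.add_edge(i, j, weight=w)

# Initial vertex score: overlap with the adjacent (next) sentence.
nstart = {i: overlap(sentences[i], sentences[min(i + 1, len(sentences) - 1)]) + 1e-6
          for i in range(len(sentences))}
scores = nx.pagerank(g, weight="weight", nstart=nstart)
top = sorted(scores, key=scores.get, reverse=True)[:1]
print([sentences[i] for i in top])
```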
  15. Hidayat W, Shakaff AY, Ahmad MN, Adom AH
    Sensors (Basel), 2010;10(5):4675-85.
    PMID: 22399899 DOI: 10.3390/s100504675
    Presently, the quality assurance of agarwood oil is performed by sensory panels, an approach which has significant drawbacks in terms of objectivity and repeatability. In this paper, it is shown how an electronic nose (e-nose) may be successfully utilised for the classification of agarwood oil. Hierarchical Cluster Analysis (HCA) and Principal Component Analysis (PCA) were used to classify different types of oil. The HCA produced a dendrogram showing the separation of e-nose data into three different groups of oils. The PCA scatter plot revealed a distinct separation between the three groups. An Artificial Neural Network (ANN) was used for a better prediction of unknown samples.
    Matched MeSH terms: Algorithms
  16. Nataraj SK, Paulraj MP, Yaacob SB, Adom AHB
    J Med Signals Sens, 2020 Nov 11;10(4):228-238.
    PMID: 33575195 DOI: 10.4103/jmss.JMSS_52_19
    Background: A simple data collection approach based on electroencephalogram (EEG) measurements has been proposed in this study to implement a brain-computer interface, i.e., a thought-controlled wheelchair navigation system with communication assistance.

    Method: The EEG signals are recorded for seven simple tasks using the designed data acquisition procedure. These seven tasks are conceivably used to control wheelchair movement and interact with others using any odd-ball paradigm. The proposed system records EEG signals from 10 individuals at eight-channel locations, during which the individual executes seven different mental tasks. The acquired brainwave patterns have been processed to eliminate noise, including artifacts and powerline noise, and are then partitioned into six different frequency bands. The proposed cross-correlation procedure then employs the segmented frequency bands from each channel to extract features. The cross-correlation procedure was used to obtain the coefficients in the frequency domain from consecutive frame samples. Then, the statistical measures ("minimum," "mean," "maximum," and "standard deviation") were derived from the cross-correlated signals. Finally, the extracted feature sets were validated using the online sequential extreme learning machine algorithm.

    Results and Conclusion: The results of the classification networks were compared across the feature sets, and they indicated that the μ(r) feature set based on cross-correlated signals had the best performance, with a recognition rate of 91.93%.

    Matched MeSH terms: Algorithms
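    A hedged NumPy sketch of the feature step described above: consecutive frames of a band-limited EEG channel are cross-correlated and each correlation is summarized by the four statistics named in the abstract. The frame length, sampling rate, and synthetic signal are assumptions for illustration, not the study's acquisition setup:

```python
import numpy as np

def frame_features(signal, frame_len):
    """Cross-correlate consecutive frames and summarize each correlation
    with (min, mean, max, std)."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    feats = []
    for a, b in zip(frames[:-1], frames[1:]):
        xc = np.correlate(a, b, mode="full")
        feats.append([xc.min(), xc.mean(), xc.max(), xc.std()])
    return np.asarray(feats)

# Toy band-limited channel: 10 s at an assumed 256 Hz, split into 1 s frames.
rng = np.random.default_rng(0)
eeg_band = rng.normal(size=256 * 10)
print(frame_features(eeg_band, frame_len=256).shape)   # (9, 4)
```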
  17. Tran HNT, Thomas JJ, Ahamed Hassain Malim NH
    PeerJ, 2022;10:e13163.
    PMID: 35578674 DOI: 10.7717/peerj.13163
    The exploration of drug-target interactions (DTI) is an essential stage in the drug development pipeline. Thanks to the assistance of computational models, notably deep learning approaches, scientists have been able to shorten the time spent on this stage. Widely used deep learning algorithms such as convolutional neural networks and recurrent neural networks are commonly employed in DTI prediction projects. However, they can hardly utilize the natural graph structure of molecular inputs. For that reason, a graph neural network (GNN) is an applicable choice for learning the chemical and structural characteristics of molecules, because it represents molecular compounds as graphs and learns the compound features from those graphs. In an effort to construct an advanced deep learning-based model for DTI prediction, we propose Deep Neural Computation (DeepNC), a framework utilizing three GNN algorithms: Generalized Aggregation Networks (GENConv), Graph Convolutional Networks (GCNConv), and Hypergraph Convolution-Hypergraph Attention (HypergraphConv). In short, our framework learns the features of drugs and targets with GNN layers and a 1-D convolution network, respectively. Then, representations of the drugs and targets are fed into fully-connected layers to predict the binding affinity values. The models of DeepNC were evaluated on two benchmark datasets (Davis, Kiba) and one independently proposed dataset (Allergy) to confirm that they are suitable for predicting the binding affinity of drugs and targets. Moreover, compared to the results of baseline methods on the same problem, DeepNC improves performance in terms of mean square error and concordance index.
    Matched MeSH terms: Algorithms
  18. Rafatullah M, Sulaiman O, Hashim R, Ahmad A
    J Hazard Mater, 2009 Oct 30;170(2-3):969-77.
    PMID: 19520510 DOI: 10.1016/j.jhazmat.2009.05.066
    The present study proposed the use of meranti sawdust for the removal of Cu(II), Cr(III), Ni(II) and Pb(II) ions from synthetic aqueous solutions. Batch adsorption studies showed that meranti sawdust was able to adsorb Cu(II), Cr(III), Ni(II) and Pb(II) ions from aqueous solutions in the concentration range 1-200 mg/L. The adsorption was favoured with maximum adsorption at pH 6, whereas the adsorption starts at pH 1 for all metal ions. The effects of contact time, initial concentration of metal ions, adsorbent dosage and temperature have been reported. The applicability of the Langmuir, Freundlich, and Dubinin-Radushkevich (D-R) isotherms was tested to fully understand the adsorption isotherm processes. The adsorption kinetics tested with pseudo-first-order and pseudo-second-order models yielded high R2 values, from 0.850 to 0.932 and from 0.991 to 0.999, respectively. The meranti sawdust was found to be cost-effective and efficient in removing these toxic metal ions from aqueous solution.
    Matched MeSH terms: Algorithms
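    The kinetic comparison above rests on fitting the pseudo-first- and pseudo-second-order models to uptake-versus-time data. A hedged SciPy sketch of the pseudo-second-order fit with made-up data points (qe is the equilibrium uptake, k2 the rate constant; not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2 * qe^2 * t / (1 + k2 * qe * t)"""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

# Hypothetical contact times (min) and metal uptake (mg/g).
t = np.array([5, 10, 20, 40, 60, 120], dtype=float)
q = np.array([2.1, 3.4, 4.6, 5.5, 5.9, 6.2])

(qe, k2), _ = curve_fit(pseudo_second_order, t, q, p0=[q.max(), 0.01])
residuals = q - pseudo_second_order(t, qe, k2)
r2 = 1 - np.sum(residuals ** 2) / np.sum((q - q.mean()) ** 2)
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f}, R^2 = {r2:.3f}")
```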
  19. Ahmad AA, Hameed BH, Ahmad AL
    J Hazard Mater, 2009 Oct 30;170(2-3):612-9.
    PMID: 19515487 DOI: 10.1016/j.jhazmat.2009.05.021
    The purpose of this work is to obtain optimal preparation conditions for activated carbons prepared from rattan sawdust (RSAC) for the removal of disperse dye from aqueous solution. The RSAC was prepared by chemical activation with phosphoric acid using response surface methodology (RSM). RSM based on a three-variable central composite design was used to determine the effects of activation temperature (400-600 °C), activation time (1-3 h) and H3PO4:precursor impregnation ratio (3:1-6:1, wt%) on C.I. Disperse Orange 30 (DO30) percentage removal and activated carbon yield. Based on the central composite design, a quadratic model was developed to correlate the preparation variables to the two responses. The most influential factor on each experimental design response was identified from the analysis of variance (ANOVA). The optimum conditions for preparation of RSAC, based on the response surface and contour plots, were found to be a temperature of 470 °C, an activation time of 2 h 14 min and a chemical impregnation ratio of 4.45.
    Matched MeSH terms: Algorithms
  20. Nur Ashida Salim, Muhammad Azizi Kaprowi, Ahmad Asri Abd Samat
    MyJurnal
    Space Vector Pulse Width Modulation (SVPWM) is widely used as a modulation technique to drive a three-phase inverter. It is an advanced, computationally intensive pulse width modulation (PWM) method for the three-phase voltage source inverter. Compared with other PWM techniques, SVPWM is easier to implement and is therefore the most preferred technique. A mathematical model of SVPWM was developed using MATLAB/Simulink software. In this paper, an interface between MATLAB/Simulink and the three-phase inverter using an Arduino Uno microcontroller is proposed. The Arduino Uno generates the SVPWM signals for a Permanent Magnet Synchronous Motor (PMSM), as described in this paper. This work consists of software and hardware implementations. Simulation was carried out in MATLAB/Simulink to verify the effectiveness of the system and to measure the percentage of Total Harmonic Distortion (THD). The results show that the SVPWM technique is able to drive the three-phase inverter with the Arduino Uno.
    Matched MeSH terms: Algorithms
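    A hedged Python sketch of the core SVPWM computation behind the model described above: locating the sector of the reference voltage vector and computing the dwell times of the two adjacent active vectors and the zero vectors. This is the textbook formulation, not the authors' Simulink implementation:

```python
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Return (sector, T1, T2, T0) for one switching period.
    v_ref: reference vector magnitude, theta: its angle in rad,
    v_dc: DC-link voltage, t_s: switching period."""
    theta = theta % (2 * math.pi)
    sector = int(theta // (math.pi / 3)) + 1          # sectors 1..6
    theta_rel = theta - (sector - 1) * math.pi / 3    # angle within the sector
    a = math.sqrt(3) * v_ref / v_dc                   # modulation index
    t1 = t_s * a * math.sin(math.pi / 3 - theta_rel)  # first adjacent active vector
    t2 = t_s * a * math.sin(theta_rel)                # second adjacent active vector
    t0 = t_s - t1 - t2                                # remaining time on zero vectors
    return sector, t1, t2, t0

# Example: 100 V reference at 50 degrees, 400 V DC link, 10 kHz switching.
print(svpwm_dwell_times(v_ref=100.0, theta=math.radians(50), v_dc=400.0, t_s=1e-4))
```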