Displaying publications 1 - 20 of 126 in total

  1. Baharuddin MY, Salleh ShH, Hamedi M, Zulkifly AH, Lee MH, Mohd Noor A, et al.
    Biomed Res Int, 2014;2014:478248.
    PMID: 24800230 DOI: 10.1155/2014/478248
    Stress shielding and micromotion are two major issues that determine the success of newly designed cementless femoral stems. The correlation of experimental validation with finite element analysis (FEA) is commonly used to evaluate the stress distribution and fixation stability of the stem within the femoral canal. This paper focused on the applications of feature extraction and pattern recognition using a support vector machine (SVM) to determine the primary stability of the implant. We measured strain with a triaxial rosette at the metaphyseal region and micromotion with linear variable differential transducers proximally and distally, using composite femora. The root-mean-square technique is used to feed the classifier, which provides maximum likelihood estimation of amplitude, and a radial basis function is used as the kernel, mapping the datasets into separable hyperplanes. The results showed 100% pattern recognition accuracy using SVM for both strain and micromotion. This indicates that digital signal processing (DSP) could be applied to determine femoral stem primary stability with high pattern recognition accuracy in biomechanical testing.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
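As a rough illustration of the classification step described above, the sketch below feeds root-mean-square amplitude features to an RBF-kernel SVM. The synthetic signals, window size, and class labels are invented for demonstration and are not the study's strain or micromotion data; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def rms_windows(signal, win=50):
    """Root-mean-square amplitude over non-overlapping windows."""
    n = len(signal) // win
    return np.sqrt((signal[:n * win].reshape(n, win) ** 2).mean(axis=1))

# Synthetic stand-ins for strain/micromotion recordings: two classes
# that differ only in amplitude (amplitudes and labels are invented).
stable   = [0.5 * rng.standard_normal(500) for _ in range(40)]
unstable = [1.5 * rng.standard_normal(500) for _ in range(40)]
X = np.array([rms_windows(s) for s in stable + unstable])
y = np.array([0] * 40 + [1] * 40)

# The RBF kernel maps the RMS feature vectors into a space where the
# two classes become separable by a hyperplane.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.score(X, y))
```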
  2. Lee NK, Fong PK, Abdullah MT
    Biomed Mater Eng, 2014;24(6):3807-14.
    PMID: 25227097 DOI: 10.3233/BME-141210
    Using a Genetic Algorithm, this paper presents a modelling method to generate novel logical-based features from DNA sequences enriched with H3K4me1 histone signatures. Current histone signatures are mostly represented using k-mer content features, which are incapable of representing all the possible complex interactions of various DNA segments. The main contributions are, among others: (a) demonstrating that there are complex interactions among sequence segments in the histone regions; (b) developing a parse tree representation of the logical complex features. The proposed novel feature is compared to the k-mer content features using datasets from the mouse (mm9) genome. Evaluation results show that the new feature improves the prediction performance, as shown by the F-measure, for all datasets tested. It is also discovered that tree-based features generated from a single chromosome can be generalized to predict histone marks in other chromosomes not used in the training. These findings have a great impact on feature design considerations for histone signatures as well as other classifier design features.
    Matched MeSH terms: Pattern Recognition, Automated/methods
  3. Yap KS, Lim CP, Abidin IZ
    IEEE Trans Neural Netw, 2008 Sep;19(9):1641-6.
    PMID: 18779094 DOI: 10.1109/TNN.2008.2000992
    In this brief, a new neural network model called generalized adaptive resonance theory (GART) is introduced. GART is a hybrid model that comprises a modified Gaussian adaptive resonance theory (MGA) and the generalized regression neural network (GRNN). It is an enhanced version of the GRNN, which preserves the online learning properties of adaptive resonance theory (ART). A series of empirical studies to assess the effectiveness of GART in classification, regression, and time series prediction tasks is conducted. The results demonstrate that GART is able to produce good performances as compared with those of other methods, including the online sequential extreme learning machine (OSELM) and sequential learning radial basis function (RBF) neural network models.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  4. Raghavendra U, Gudigar A, Bhandary SV, Rao TN, Ciaccio EJ, Acharya UR
    J Med Syst, 2019 Jul 30;43(9):299.
    PMID: 31359230 DOI: 10.1007/s10916-019-1427-x
    Glaucoma is a type of eye condition which may result in partial or complete vision loss. Higher intraocular pressure is the leading cause of this condition. Screening for glaucoma and early detection can avert vision loss. Computer aided diagnosis (CAD) is an automated process with the potential to identify glaucoma early through quantitative analysis of digital fundus images. Preparing an effective model for CAD requires a large database. This study presents a CAD tool for the precise detection of glaucoma using a machine learning approach. An autoencoder is trained to determine effective and important features from fundus images. These features are used to develop classes of glaucoma for testing. The method achieved an F-measure value of 0.95 utilizing 1426 digital fundus images (589 control and 837 glaucoma). The efficacy of the system is evident, and is suggestive of its possible utility as an additional tool for verification of clinical decisions.
    Matched MeSH terms: Pattern Recognition, Automated/methods
  5. Adam M, Oh SL, Sudarshan VK, Koh JE, Hagiwara Y, Tan JH, et al.
    Comput Methods Programs Biomed, 2018 Jul;161:133-143.
    PMID: 29852956 DOI: 10.1016/j.cmpb.2018.04.018
    Cardiovascular diseases (CVDs) are the leading cause of deaths worldwide. The rising mortality rate can be reduced by early detection and treatment interventions. Clinically, the electrocardiogram (ECG) signal provides useful information about cardiac abnormalities and is hence employed as a diagnostic modality for the detection of various CVDs. However, the changes in these time series that indicate a particular disease are subtle. Therefore, it may be monotonous, time-consuming and stressful to inspect these ECG beats manually. In order to overcome this limitation of manual ECG signal analysis, this paper uses a novel discrete wavelet transform (DWT) method combined with nonlinear features for automated characterization of CVDs. ECG signals of normal subjects and of dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM) and myocardial infarction (MI) patients are subjected to five levels of DWT. Relative wavelet nonlinear features, namely fuzzy entropy, sample entropy, fractal dimension and signal energy, are extracted from the DWT coefficients. These features are fed to a sequential forward selection (SFS) technique and then ranked using the ReliefF method. Our proposed methodology achieved a maximum classification accuracy (acc) of 99.27%, sensitivity (sen) of 99.74%, and specificity (spec) of 98.08% with the K-nearest neighbor (kNN) classifier using 15 features ranked by the ReliefF method. Our proposed methodology can be used by clinical staff to make a faster and more accurate diagnosis of CVDs. Thus, the chances of survival can be significantly increased by early detection and treatment of CVDs.
    Matched MeSH terms: Pattern Recognition, Automated*
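A minimal sketch of the feature-then-classify pattern above, using simplified stand-ins (signal energy, a Shannon-entropy proxy, standard deviation) for the paper's wavelet-domain nonlinear features, and synthetic "beats" in place of real ECG; scikit-learn is assumed.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def features(beat):
    """Simplified stand-ins for the paper's nonlinear features."""
    energy = np.sum(beat ** 2)                 # signal energy
    hist, _ = np.histogram(beat, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))           # Shannon-entropy proxy
    return [energy, entropy, np.std(beat)]

# Two synthetic "beat" classes with different amplitude dynamics
# (illustrative only, not the MIT-BIH or study data).
t = np.linspace(0, 8 * np.pi, 200)
normal   = [np.sin(t) + 0.1 * rng.standard_normal(200) for _ in range(30)]
abnormal = [2.0 * np.sin(t) + 0.4 * rng.standard_normal(200) for _ in range(30)]
X = np.array([features(b) for b in normal + abnormal])
y = np.array([0] * 30 + [1] * 30)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.score(X, y))
```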
  6. Yildirim O, Baloglu UB, Tan RS, Ciaccio EJ, Acharya UR
    Comput Methods Programs Biomed, 2019 Jul;176:121-133.
    PMID: 31200900 DOI: 10.1016/j.cmpb.2019.05.004
    BACKGROUND AND OBJECTIVE: For diagnosis of arrhythmic heart problems, electrocardiogram (ECG) signals should be recorded and monitored. The long-term signal records obtained are analyzed by expert cardiologists. Devices such as the Holter monitor have limited hardware capabilities. For improved diagnostic capacity, it would be helpful to detect arrhythmic signals automatically. In this study, a novel approach is presented as a candidate solution for these issues.

    METHODS: A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long-short term memory (LSTM) classifiers are employed to automatically recognize arrhythmias using ECG features, which are deeply coded with the CAE network.

    RESULTS: Based upon the coded ECG signals, both storage requirement and classification time were considerably reduced. In experimental studies conducted with the MIT-BIH arrhythmia database, ECG signals were compressed at an average percentage root-mean-square difference (PRD) of 0.70%, and an accuracy of over 99.0% was observed.

    CONCLUSIONS: One of the significant contributions of this study is that the proposed approach can significantly reduce time duration when using LSTM networks for data analysis. Thus, a novel and effective approach was proposed for both ECG signal compression, and their high-performance automatic recognition, with very low computational cost.

    Matched MeSH terms: Pattern Recognition, Automated
  7. Acharya UR, Bhat S, Koh JEW, Bhandary SV, Adeli H
    Comput Biol Med, 2017 Sep 01;88:72-83.
    PMID: 28700902 DOI: 10.1016/j.compbiomed.2017.06.022
    Glaucoma is an optic neuropathy defined by characteristic damage to the optic nerve and accompanying visual field deficits. Early diagnosis and treatment are critical to prevent irreversible vision loss and ultimate blindness. Current techniques for computer-aided analysis of the optic nerve and retinal nerve fiber layer (RNFL) are expensive and require keen interpretation by trained specialists. Hence, an automated system is highly desirable for a cost-effective and accurate screening for the diagnosis of glaucoma. This paper presents a new methodology and a computerized diagnostic system. Adaptive histogram equalization is used to convert color images to grayscale images, followed by convolution of these images with Leung-Malik (LM), Schmid (S), and maximum response (MR4 and MR8) filter banks. The convolution process produces textons, the basic microstructures of typical images. Local configuration pattern (LCP) features are extracted from these textons. The significant features are selected using a sequential floating forward search (SFFS) method and ranked using the statistical t-test. Finally, various classifiers are used for classification of images into normal and glaucomatous classes. A high classification accuracy of 95.8% is achieved using six features obtained from the LM filter bank and the k-nearest neighbor (kNN) classifier. A glaucoma integrative index (GRI) is also formulated to obtain a reliable and effective system.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  8. Moghaddasi Z, Jalab HA, Md Noor R, Aghabozorgi S
    ScientificWorldJournal, 2014;2014:606570.
    PMID: 25295304 DOI: 10.1155/2014/606570
    Digital image forgery is becoming easier to perform because of the rapid development of various manipulation tools. Image splicing is one of the most prevalent techniques. Digital images have lost their trustworthiness, and researchers have exerted considerable effort to regain it, focusing mostly on algorithms. However, most of the proposed algorithms are incapable of handling the high dimensionality and redundancy of the extracted features. Moreover, existing algorithms are limited by high computational time. This study focuses on improving one of the image splicing detection algorithms, the run length run number algorithm (RLRN), by applying two dimension reduction methods, namely principal component analysis (PCA) and kernel PCA. A support vector machine is used to distinguish between authentic and spliced images. Results show that kernel PCA, a nonlinear dimension reduction method, has the best effect on R, G, B, and Y channels and gray-scale images.
    Matched MeSH terms: Pattern Recognition, Automated/methods*; Pattern Recognition, Automated/trends
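The PCA-versus-kernel-PCA contrast can be illustrated on a toy nonlinearly separable dataset standing in for the high-dimensional RLRN features; all parameters here are illustrative, and scikit-learn is assumed.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.svm import SVC

# Concentric circles: a classic case where linear PCA cannot help a
# linear classifier, but RBF kernel PCA unfolds the classes.
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

accs = {}
for name, reducer in [("PCA", PCA(n_components=2)),
                      ("kernel PCA", KernelPCA(n_components=2, kernel="rbf", gamma=10))]:
    Z = reducer.fit_transform(X)
    # A linear SVM on the reduced features exposes whether the
    # reduction made the classes separable.
    accs[name] = SVC(kernel="linear").fit(Z, y).score(Z, y)
    print(f"{name}: {accs[name]:.2f}")
```

On this toy data the kernel PCA projection is linearly separable while the plain PCA projection is not, mirroring the abstract's finding that the nonlinear reduction works best.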
  9. Rahman HA, Che Ani AI, Harun SW, Yasin M, Apsari R, Ahmad H
    J Biomed Opt, 2012 Jul;17(7):071308.
    PMID: 22894469 DOI: 10.1117/1.JBO.17.7.071308
    The purpose of this study is to investigate the potential of intensity modulated fiber optic displacement sensor scanning system for the imaging of dental cavity. Here, we discuss our preliminary results in the imaging of cavities on various teeth surfaces, as well as measurement of the diameter of the cavities which are represented by drilled holes on the teeth surfaces. Based on the analysis of displacement measurement, the sensitivities and linear range for the molar, canine, hybrid composite resin, and acrylic surfaces are obtained at 0.09667 mV/mm and 0.45 mm; 0.775 mV/mm and 0.4 mm; 0.5109 mV/mm and 0.5 mm; and 0.25 mV/mm and 0.5 mm, respectively, with a good linearity of more than 99%. The results also show a clear distinction between the cavity and surrounding tooth region. The stability, simplicity of design, and low cost of fabrication make it suitable for restorative dentistry.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  10. Hannan MA, Arebey M, Begum RA, Basri H, Al Mamun MA
    Waste Manag, 2016 Apr;50:10-9.
    PMID: 26868844 DOI: 10.1016/j.wasman.2016.01.046
    This paper presents a CBIR system to investigate the use of image retrieval with texture extracted from the image of a bin to detect the bin level. Various similarity distances, such as Euclidean, Bhattacharyya, Chi-squared, Cosine, and EMD, are used with the CBIR system for calculating and comparing the distance between a query image and the images in a database to obtain the highest performance. In this study, the performance metrics are based on two quantitative evaluation criteria: the first is the average retrieval rate based on the precision-recall graph, and the second is the F1 measure, the weighted harmonic mean of precision and recall. For feature extraction, texture is used as the image feature for the bin level detection system. Various experiments are conducted with different feature extraction techniques, such as the Gabor wavelet filter, gray level co-occurrence matrix (GLCM), and gray level aura matrix (GLAM), to identify the level of the bin and its surrounding area. Intensive tests are conducted on 250 bin images to assess the accuracy of the proposed feature extraction techniques. The average retrieval rate is used to evaluate the performance of the retrieval system. The results show that the EMD distance achieves the highest accuracy and provides better performance than the other distances.
    Matched MeSH terms: Pattern Recognition, Automated
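The similarity distances named above (apart from EMD) are simple to state directly; the toy normalized histograms below stand in for texture histograms of bin images.

```python
import numpy as np

def euclidean(p, q):
    return np.sqrt(np.sum((p - q) ** 2))

def chi_squared(p, q):
    mask = (p + q) > 0
    return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / (p[mask] + q[mask]))

def cosine_distance(p, q):
    return 1 - p @ q / (np.linalg.norm(p) * np.linalg.norm(q))

def bhattacharyya(p, q):
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

# Toy normalized texture histograms: a query bin image vs. two candidates.
query     = np.array([0.10, 0.40, 0.30, 0.20])
similar   = np.array([0.15, 0.35, 0.30, 0.20])
different = np.array([0.70, 0.10, 0.10, 0.10])

# Every distance should rank the similar candidate closer to the query.
for d in (euclidean, chi_squared, cosine_distance, bhattacharyya):
    assert d(query, similar) < d(query, different)
    print(d.__name__, round(d(query, similar), 4), round(d(query, different), 4))
```

For 1-D histograms, an earth mover's distance comparable to the paper's EMD is available as `scipy.stats.wasserstein_distance`.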
  11. Al-Dabbagh MM, Salim N, Rehman A, Alkawaz MH, Saba T, Al-Rodhaan M, et al.
    ScientificWorldJournal, 2014;2014:612787.
    PMID: 25309952 DOI: 10.1155/2014/612787
    This paper presents a novel feature-mining approach for documents that cannot be mined via optical character recognition (OCR). By identifying the intimate relationship between the text and graphical components, the proposed technique pulls out the Start, End, and Exact values for each bar. Furthermore, word 2-gram and Euclidean distance methods are used to accurately detect and determine plagiarism in bar charts.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  12. Al-Saiagh W, Tiun S, Al-Saffar A, Awang S, Al-Khaleefa AS
    PLoS One, 2018;13(12):e0208695.
    PMID: 30571777 DOI: 10.1371/journal.pone.0208695
    Word sense disambiguation (WSD) is the process of identifying an appropriate sense for an ambiguous word. With the complexity of human languages in which a single word could yield different meanings, WSD has been utilized by several domains of interests such as search engines and machine translations. The literature shows a vast number of techniques used for the process of WSD. Recently, researchers have focused on the use of meta-heuristic approaches to identify the best solutions that reflect the best sense. However, the application of meta-heuristic approaches remains limited and thus requires the efficient exploration and exploitation of the problem space. Hence, the current study aims to propose a hybrid meta-heuristic method that consists of particle swarm optimization (PSO) and simulated annealing to find the global best meaning of a given text. Different semantic measures have been utilized in this model as objective functions for the proposed hybrid PSO. These measures consist of JCN and extended Lesk methods, which are combined effectively in this work. The proposed method is tested using a three-benchmark dataset (SemCor 3.0, SensEval-2, and SensEval-3). Results show that the proposed method has superior performance in comparison with state-of-the-art approaches.
    Matched MeSH terms: Pattern Recognition, Automated
  13. Al-Quraishi MS, Ishak AJ, Ahmad SA, Hasan MK, Al-Qurishi M, Ghapanchizadeh H, et al.
    Med Biol Eng Comput, 2017 May;55(5):747-758.
    PMID: 27484411 DOI: 10.1007/s11517-016-1551-4
    Electromyography (EMG)-based control is the core of prostheses, orthoses, and other rehabilitation devices in recent research. Nonetheless, EMG is difficult to use as a control signal given the complex nature of the signal. To overcome this problem, researchers have employed pattern recognition techniques. EMG pattern recognition mainly involves four stages: signal detection and preprocessing, feature extraction, dimensionality reduction, and classification. In particular, the success of any pattern recognition technique depends on the feature extraction stage. In this study, a modified time-domain feature set of logarithmically transformed time-domain features (LTD) was evaluated and compared with a traditional time-domain feature set (TTD). Three classifiers were employed to assess the two feature sets, namely linear discriminant analysis (LDA), k-nearest neighbor, and Naïve Bayes. Results indicated the superiority of the new time-domain feature set, LTD, over the conventional time-domain features, TTD, with an average classification accuracy of 97.23%. In addition, the LDA classifier outperformed the other two classifiers considered in this study.
    Matched MeSH terms: Pattern Recognition, Automated/methods
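The classic time-domain EMG features and a log-transformed variant can be sketched as below; the exact feature set, the synthetic bursts, and the log transform here are illustrative stand-ins for the paper's LTD definition, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

def td_features(x):
    """Classic time-domain EMG features (Hudgins-style set)."""
    mav = np.mean(np.abs(x))                          # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))                   # waveform length
    zc = np.sum(np.diff(np.signbit(x).astype(int)) != 0)   # zero crossings
    ssc = np.sum(np.diff(np.sign(np.diff(x))) != 0)   # slope sign changes
    return np.array([mav, wl, zc, ssc], dtype=float)

def ltd_features(x):
    # Log-transformed variant, a stand-in for the paper's LTD set.
    return np.log1p(td_features(x))

# Synthetic EMG bursts for two gestures (amplitudes are illustrative).
g1 = [0.3 * rng.standard_normal(400) for _ in range(25)]
g2 = [1.0 * rng.standard_normal(400) for _ in range(25)]
X = np.array([ltd_features(s) for s in g1 + g2])
y = np.array([0] * 25 + [1] * 25)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y))
```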
  14. Tayan O, Kabir MN, Alginahi YM
    ScientificWorldJournal, 2014;2014:514652.
    PMID: 25254247 DOI: 10.1155/2014/514652
    This paper addresses the problems and threats associated with verification of integrity, proof of authenticity, tamper detection, and copyright protection for digital-text content. Such issues have largely been addressed in the literature for images, audio, and video, with only a few papers addressing the challenge of sensitive plain-text media under known constraints. Specifically, with text as the predominant online communication medium, it becomes crucial that techniques are deployed to protect such information. A number of digital-signature, hashing, and watermarking schemes have been proposed that essentially bind source data or embed invisible data in a cover media to achieve their goal. While many such complex schemes with resource redundancies are sufficient for offline and less-sensitive texts, this paper proposes a hybrid approach based on zero-watermarking and digital-signature-like manipulations for sensitive text documents in order to achieve content originality and integrity verification without physically modifying the cover text in any way. The proposed algorithm was implemented and shown to be robust against undetected content modifications and is capable of confirming proof of originality whilst detecting and locating deliberate/nondeliberate tampering. Additionally, enhancements in resource utilisation and reduced redundancies were achieved in comparison to traditional encryption-based approaches. Finally, analysis and remarks are made about the current state of the art, and future research issues are discussed under the given constraints.
    Matched MeSH terms: Pattern Recognition, Automated
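The zero-watermarking idea above, deriving a verification signature from intrinsic text features without altering the cover text, can be sketched as follows; the feature choice and hash are illustrative, not the paper's scheme.

```python
import hashlib

def zero_watermark(text: str) -> str:
    """Derive a verification signature from intrinsic text features.
    The cover text itself is never modified (hence "zero" watermark);
    the specific features below are invented for illustration."""
    words = text.split()
    features = "|".join([
        str(len(words)),                      # word count
        str(sum(len(w) for w in words)),      # character count
        "".join(w[0] for w in words),         # first-letter sequence
    ])
    return hashlib.sha256(features.encode()).hexdigest()

original = "The quick brown fox jumps over the lazy dog"
tampered = "The quick brown cat jumps over the lazy dog"

# The signature is reproducible for the untouched text and changes
# when the content is tampered with.
assert zero_watermark(original) == zero_watermark(original)
assert zero_watermark(original) != zero_watermark(tampered)
print(zero_watermark(original)[:16])
```

In a deployed scheme the signature would be registered with a trusted authority at publication time and recomputed at verification time, so tampering is detected without any mark ever being embedded in the text.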
  15. Ewe ELR, Lee CP, Lim KM, Kwek LC, Alqahtani A
    PLoS One, 2024;19(4):e0298699.
    PMID: 38574042 DOI: 10.1371/journal.pone.0298699
    Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity to capture fine-grained details. In response to these challenges, a novel approach is proposed-Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. Leveraging the Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting concerns, and resilience against noisy and incomplete data. Additionally, the model performance is further optimized through hyperparameter optimization, utilizing the Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving remarkable results of 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
    Matched MeSH terms: Pattern Recognition, Automated/methods
  16. Loh KB, Ramli N, Tan LK, Roziah M, Rahmat K, Ariffin H
    Eur Radiol, 2012 Jul;22(7):1413-26.
    PMID: 22434420 DOI: 10.1007/s00330-012-2396-3
    OBJECTIVES: The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using an automated ROI with an existing DTI atlas.

    METHODS: Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction.

    RESULTS: DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly early in the first 12 months. The second phase lasted 12-24 months during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued.

    CONCLUSION: DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data.

    KEY POINTS: • Diffusion tensor imaging outperforms conventional MRI in depicting white matter maturation. • DTI will become an important clinical tool for diagnosing paediatric neurological diseases. • DTI appears especially helpful for developmental abnormalities, tumours and white matter disease. • An automated processing pipeline assists quantitative analysis of high-throughput DTI data.

    Matched MeSH terms: Pattern Recognition, Automated/methods
  17. Yap KS, Lim CP, Au MT
    IEEE Trans Neural Netw, 2011 Dec;22(12):2310-23.
    PMID: 22067292 DOI: 10.1109/TNN.2011.2173502
    Generalized adaptive resonance theory (GART) is a neural network model that is capable of online learning and is effective in tackling pattern classification tasks. In this paper, we propose an improved GART model (IGART), and demonstrate its applicability to power systems. IGART enhances the dynamics of GART in several aspects, which include the use of the Laplacian likelihood function, a new vigilance function, a new match-tracking mechanism, an ordering algorithm for determining the sequence of training data, and a rule extraction capability to elicit if-then rules from the network. To assess the effectiveness of IGART and to compare its performances with those from other methods, three datasets that are related to power systems are employed. The experimental results demonstrate the usefulness of IGART with the rule extraction capability in undertaking classification problems in power systems engineering.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  18. Zain JM, Fauzi AM, Aziz AA
    Conf Proc IEEE Eng Med Biol Soc, 2007 10 20;2006:5459-62.
    PMID: 17946306
    Digitally watermarking medical images provides security to the images. The purpose of this study was to determine whether digitally watermarked images changed clinical diagnoses when assessed by radiologists. We embedded a 256-bit watermark in various medical images in the region of non-interest (RONI) and 480K bits in both the region of interest (ROI) and RONI. Our results showed that watermarking medical images did not alter clinical diagnoses. In addition, there was no difference in image quality when visually assessed by the medical radiologists. We therefore concluded that digital watermarking of medical images is safe in terms of preserving image quality for clinical purposes.
    Matched MeSH terms: Pattern Recognition, Automated*
  19. Lee CK, Chang CC, Johor A, Othman P, Baba R
    Int J Dermatol, 2015 Jul;54(7):765-70.
    PMID: 25427962 DOI: 10.1111/ijd.12451
    Fingerprint changes associated with hand dermatitis are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis.
    Matched MeSH terms: Pattern Recognition, Automated
  20. Saybani MR, Shamshirband S, Golzari S, Wah TY, Saeed A, Mat Kiah ML, et al.
    Med Biol Eng Comput, 2016 Mar;54(2-3):385-99.
    PMID: 26081904 DOI: 10.1007/s11517-015-1323-6
    Tuberculosis is a major global health problem that has been ranked as the second leading cause of death from an infectious disease worldwide, after the human immunodeficiency virus. Diagnosis based on cultured specimens is the reference standard; however, results take weeks to obtain. Slow and insensitive diagnostic methods have hampered the global control of tuberculosis, and scientists are looking for early detection strategies, which remain the foundation of tuberculosis control. Consequently, there is a need to develop an expert system that helps medical professionals to accurately diagnose the disease. The objective of this study is to diagnose tuberculosis using a machine learning method. The artificial immune recognition system (AIRS) has been used successfully for diagnosing various diseases. However, little effort has been undertaken to improve its classification accuracy. In order to increase the classification accuracy, this study introduces a new hybrid system that incorporates a real tournament selection mechanism into the AIRS. This mechanism is used to control the population size of the model and to overcome the existing selection pressure. Patient epicrisis reports obtained from the Pasteur laboratory in northern Iran were used as the benchmark data set. The sample consisted of 175 records, of which 114 (65%) were positive for TB and the remaining 61 (35%) were negative. The classification performance was measured through tenfold cross-validation, root-mean-square error (RMSE), sensitivity, and specificity. With an accuracy of 100%, an RMSE of 0, a sensitivity of 100%, and a specificity of 100%, the proposed method was able to successfully classify tuberculosis cases. In addition, the proposed method is comparable with the top classifiers used in this research.
    Matched MeSH terms: Pattern Recognition, Automated*
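The tournament selection mechanism used above to control population size can be sketched generically; the antibody pool and affinity function below are toy stand-ins for the AIRS memory-cell population, not the study's model.

```python
import random

random.seed(7)

def tournament_select(population, fitness, k=3):
    """Pick the fittest of k randomly drawn candidates (one tournament)."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)

def shrink_population(population, fitness, target_size, k=3):
    """Run tournaments until target_size distinct winners survive,
    keeping selection pressure without sorting the whole pool."""
    survivors = []
    while len(survivors) < target_size:
        winner = tournament_select(population, fitness, k)
        if winner not in survivors:
            survivors.append(winner)
    return survivors

# Toy antibody pool: affinity = closeness to a target value (illustrative).
pool = [random.uniform(0, 10) for _ in range(50)]
affinity = lambda x: -abs(x - 5.0)

kept = shrink_population(pool, affinity, target_size=10)
print(len(kept), round(sum(kept) / len(kept), 2))
```

Raising k strengthens the selection pressure (winners cluster more tightly around high-affinity cells); lowering it preserves more diversity.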