Displaying publications 1 - 20 of 126 in total

  1. Ewe ELR, Lee CP, Lim KM, Kwek LC, Alqahtani A
    PLoS One, 2024;19(4):e0298699.
    PMID: 38574042 DOI: 10.1371/journal.pone.0298699
    Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity to capture fine-grained details. In response to these challenges, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. Leveraging the Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting concerns, and resilience against noisy and incomplete data. Additionally, the model performance is further optimized through hyperparameter optimization, utilizing Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving remarkable results of 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
    Matched MeSH terms: Pattern Recognition, Automated/methods
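    The abstract above says hyperparameters were tuned with Optuna combined with hill climbing but gives no implementation details. The following is a minimal pure-Python sketch of the hill-climbing component only; the objective function and both hyperparameters (a learning-rate index and a tree count) are invented stand-ins for validation accuracy, not the authors' setup.

```python
import random

# Toy objective standing in for validation accuracy as a function of two
# hypothetical hyperparameters; its maximum (0.0) is at (3, 200).
def objective(params):
    lr_idx, n_trees = params
    return -((lr_idx - 3) ** 2 + (n_trees - 200) ** 2 / 1000.0)

def hill_climb(start, steps=200, seed=0):
    rng = random.Random(seed)
    best, best_score = start, objective(start)
    for _ in range(steps):
        # Propose a small random perturbation of the current best point.
        cand = (best[0] + rng.choice([-1, 0, 1]),
                best[1] + rng.choice([-10, 0, 10]))
        score = objective(cand)
        if score > best_score:  # keep only improving moves
            best, best_score = cand, score
    return best, best_score

best, score = hill_climb((0, 50))
print(best, round(score, 3))
```

    In an Optuna-style workflow this local search would refine the best configuration found by the global sampler rather than replace it.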
  2. Govindapillai S, Soon LK, Haw SC
    F1000Res, 2021;10:881.
    PMID: 34900233 DOI: 10.12688/f1000research.72843.2
    A knowledge graph (KG) publishes a machine-readable representation of knowledge on the Web. Structured data in the knowledge graph is published using the Resource Description Framework (RDF), where knowledge is represented as a triple (subject, predicate, object). Due to the presence of erroneous, outdated or conflicting data in the knowledge graph, the quality of facts cannot be guaranteed. Trustworthiness of facts in a knowledge graph can be enhanced by the addition of metadata such as the source of information and the location and time of the fact occurrence. Since RDF does not support metadata for providing provenance and contextualization, an alternate method, RDF reification, is employed by most knowledge graphs. RDF reification increases the magnitude of data, as several statements are required to represent a single fact. Another limitation for applications that use provenance data, such as those in the medical domain and in cyber security, is that not all facts in these knowledge graphs are annotated with provenance data. In this paper, we provide an overview of prominent reification approaches together with an analysis of the popular general knowledge graphs Wikidata and YAGO4 with regard to the representation of provenance and context data. Wikidata employs qualifiers to include metadata in facts, while YAGO4 collects metadata from Wikidata qualifiers. However, facts in Wikidata and YAGO4 can be fetched without using reification to cater for applications that do not require metadata. To the best of our knowledge, this is the first paper that investigates the method and the extent of metadata covered by two prominent KGs, Wikidata and YAGO4.
    Matched MeSH terms: Pattern Recognition, Automated*
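    The data blow-up that standard RDF reification causes, as discussed in the abstract above, can be illustrated with plain Python tuples (the `ex:` identifiers below are hypothetical): one fact becomes four triples describing a statement node, plus one more triple per metadata attribute.

```python
# A single fact as an RDF triple (subject, predicate, object).
fact = ("ex:BarackObama", "ex:presidentOf", "ex:USA")

def reify(triple, statement_id, provenance):
    """Standard RDF reification: one fact expands into four triples about
    a statement node; metadata then attaches to that node."""
    s, p, o = triple
    triples = [
        (statement_id, "rdf:type", "rdf:Statement"),
        (statement_id, "rdf:subject", s),
        (statement_id, "rdf:predicate", p),
        (statement_id, "rdf:object", o),
    ]
    # Provenance attaches to the statement node, not to the original triple.
    triples += [(statement_id, k, v) for k, v in provenance.items()]
    return triples

for t in reify(fact, "ex:stmt1", {"ex:source": "ex:Wikipedia"}):
    print(t)
```

    Five triples to record one fact with one provenance attribute is exactly the magnitude increase the paper points out; Wikidata's qualifiers avoid this by attaching metadata to a statement object natively.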
  3. Silalahi DD, Midi H, Arasan J, Mustafa MS, Caliman JP
    Heliyon, 2020 Jan;6(1):e03176.
    PMID: 32042959 DOI: 10.1016/j.heliyon.2020.e03176
    In practice, the collected spectra are very often composed of complex overtones and many overlapping peaks, which may lead to misinterpretation because of their significant nonlinear characteristics; using a linear solution might not be appropriate. In addition, with a high-dimensional dataset due to the large number of observations and data points, classical multiple regression will fail to fit. These complexities commonly lead to a multicollinearity problem, and furthermore the risk of contamination by multiple outliers and high leverage points also increases. To address these problems, a new method called Kernel Partial Diagnostic Robust Potential (KPDRGP) is introduced. The method allows a nonlinear solution which maps the original input X matrix nonlinearly into a higher-dimensional feature mapping corresponding to the Reproducing Kernel Hilbert Space (RKHS). In dimensional reduction, the method replaces the dot-product calculation of elements in the mapped data with a nonlinear function in the original input space. To prevent contamination by multiple outliers and high leverage points, a robust procedure using the Diagnostic Robust Generalized Potentials (DRGP) algorithm was used. The results on simulated and real data verified that the proposed KPDRGP method was superior to methods in the non-kernel class and to some other robust methods with kernel solutions.
    Matched MeSH terms: Pattern Recognition, Automated
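    The kernel trick the abstract describes (computing RKHS dot products as a nonlinear function in the original input space) can be shown with a plain RBF Gram matrix. This is a generic sketch, not the authors' code, and the `gamma` value is an arbitrary choice.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """k(x, y) = exp(-gamma * ||x - y||^2): the dot product of the mapped
    points in the RKHS, evaluated entirely in the original input space."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq)

X = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
# Gram matrix K[i][j] = k(x_i, x_j): symmetric, with ones on the diagonal.
K = [[rbf_kernel(xi, xj) for xj in X] for xi in X]
print([[round(v, 4) for v in row] for row in K])
```

    Kernel methods like KPDRGP operate on this Gram matrix instead of on an explicit (possibly infinite-dimensional) feature mapping.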
  4. Vijayasarveswari V, Andrew AM, Jusoh M, Sabapathy T, Raof RAA, Yasin MNM, et al.
    PLoS One, 2020;15(8):e0229367.
    PMID: 32790672 DOI: 10.1371/journal.pone.0229367
    Breast cancer is the most common cancer among women and one of the main causes of death for women worldwide. To attain an optimum medical treatment for breast cancer, early breast cancer detection is crucial. This paper proposes a multi-stage feature selection method that extracts statistically significant features for breast cancer size detection using proposed data normalization techniques. Ultra-wideband (UWB) signals, controlled using a microcontroller, are transmitted via an antenna from one end of the breast phantom and are received on the other end. These ultra-wideband analogue signals are represented in both the time and frequency domains. The preprocessed digital data is passed to the proposed multi-stage feature selection algorithm. This algorithm has four selection stages: data normalization methods, feature extraction, data dimensionality reduction and feature fusion. The output data is fused together to form the proposed datasets, namely, the 8-HybridFeature, 9-HybridFeature and 10-HybridFeature datasets. The classification performance of these datasets is tested using the Support Vector Machine, Probabilistic Neural Network and Naïve Bayes classifiers for breast cancer size classification. The research findings indicate that the 8-HybridFeature dataset performs better than the other two datasets. For the 8-HybridFeature dataset, the Naïve Bayes classifier (91.98%) outperformed the Support Vector Machine (90.44%) and Probabilistic Neural Network (80.05%) classifiers in terms of classification accuracy. The finalized method is tested and visualized in MATLAB-based 2D and 3D environments.
    Matched MeSH terms: Pattern Recognition, Automated/methods
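    The paper does not state which normalization formulas its first stage uses, so the following is a sketch of two standard choices (min-max and z-score) that such pipelines commonly apply before feature extraction; the sample signal values are invented.

```python
def min_max(col):
    """Scale a feature column linearly to [0, 1]."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def z_score(col):
    """Center to zero mean and scale to unit (population) variance."""
    n = len(col)
    mean = sum(col) / n
    var = sum((v - mean) ** 2 for v in col) / n
    return [(v - mean) / var ** 0.5 for v in col]

signal = [2.0, 4.0, 6.0, 8.0]
print(min_max(signal))
print(z_score(signal))
```

    Normalization matters here because the time- and frequency-domain UWB features live on very different scales, and distance-based classifiers are sensitive to that.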
  5. Agbolade O, Nazri A, Yaakob R, Ghani AA, Cheah YK
    BMC Bioinformatics, 2019 Dec 02;20(1):619.
    PMID: 31791234 DOI: 10.1186/s12859-019-3153-2
    BACKGROUND: Expression in H. sapiens plays a remarkable role when it comes to social communication. The identification of this expression by human beings is relatively easy and accurate. However, achieving the same result in 3D by machine remains a challenge in computer vision. This is due to the current challenges facing facial data acquisition in 3D, such as lack of homology and complex mathematical analysis for facial point digitization. This study proposes facial expression recognition in humans using the application of multi-point warping for 3D facial landmarks, by building a template mesh as a reference object. This template mesh is then applied to each of the target meshes on the Stirling/ESRC and Bosphorus datasets. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between a template and a target form is minimal, and localization error is assessed using Procrustes ANOVA. With Principal Component Analysis (PCA) used for feature selection, classification is performed using Linear Discriminant Analysis (LDA).

    RESULT: The localization error is validated on the two datasets with superior performance over the state-of-the-art methods and variation in the expression is visualized using Principal Components (PCs). The deformations show various expression regions in the faces. The results indicate that Sad expression has the lowest recognition accuracy on both datasets. The classifier achieved a recognition accuracy of 99.58 and 99.32% on Stirling/ESRC and Bosphorus, respectively.

    CONCLUSION: The results demonstrate that the method is robust and in agreement with the state-of-the-art results.

    Matched MeSH terms: Pattern Recognition, Automated*
  6. Acharya UR, Fernandes SL, WeiKoh JE, Ciaccio EJ, Fabell MKM, Tanik UJ, et al.
    J Med Syst, 2019 Aug 09;43(9):302.
    PMID: 31396722 DOI: 10.1007/s10916-019-1428-9
    The aim of this work is to develop a Computer-Aided-Brain-Diagnosis (CABD) system that can determine if a brain scan shows signs of Alzheimer's disease. The method utilizes Magnetic Resonance Imaging (MRI) for classification with several feature extraction techniques. MRI is a non-invasive procedure, widely adopted in hospitals to examine cognitive abnormalities. Images are acquired using the T2 imaging sequence. The paradigm consists of a series of quantitative techniques: filtering, feature extraction, Student's t-test-based feature selection, and k-Nearest Neighbor (KNN)-based classification. Additionally, a comparative analysis is done by implementing other feature extraction procedures that are described in the literature. Our findings suggest that the Shearlet Transform (ST) feature extraction technique offers improved results for Alzheimer's diagnosis as compared to alternative methods. The proposed CABD tool with the ST + KNN technique provided an accuracy of 94.54%, a precision of 88.33%, a sensitivity of 96.30% and a specificity of 93.64%. Furthermore, this tool also offered an accuracy, precision, sensitivity and specificity of 98.48%, 100%, 96.97% and 100%, respectively, with the benchmark MRI database.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
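    The t-test-plus-KNN paradigm in the abstract above can be sketched in a few lines. This is a generic illustration with made-up two-feature toy vectors and class labels, not the paper's pipeline: features with large |t| between the two groups would be kept, and a plain k-NN vote classifies new scans.

```python
import statistics

def t_stat(a, b):
    """Welch's two-sample t statistic: features with large |t| separate
    the two classes and survive the selection step."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

def knn_predict(train, labels, x, k=3):
    """Plain k-nearest-neighbour majority vote with squared Euclidean distance."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Toy feature values for two classes; a clearly separated feature yields |t| >> 0.
print(round(t_stat([0.0, 0.1, 0.2], [1.0, 1.1, 1.2]), 2))

train = [[0.1, 1.0], [0.2, 0.9], [0.9, 0.1], [1.0, 0.2], [0.8, 0.0]]
labels = ["control", "control", "ad", "ad", "ad"]
print(knn_predict(train, labels, [0.85, 0.1]))  # nearest neighbours are all "ad"
```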
  7. Pogorelov K, Suman S, Azmadi Hussin F, Saeed Malik A, Ostroukhova O, Riegler M, et al.
    J Appl Clin Med Phys, 2019 Aug;20(8):141-154.
    PMID: 31251460 DOI: 10.1002/acm2.12662
    Wireless capsule endoscopy (WCE) is an effective technology that can be used to make a gastrointestinal (GI) tract diagnosis of various lesions and abnormalities. Due to the long time required to pass through the GI tract, the resulting WCE data stream contains a large number of frames, making it tedious for clinical experts to visually check each and every frame of a complete patient's video footage. In this paper, an automated technique for bleeding detection based on color and texture features is proposed. The approach combines color information, which is an essential feature for the initial detection of frames with bleeding, with texture, which plays an important role in extracting more information from the lesions captured in the frames and allows the system to distinguish finely between borderline cases. The detection algorithm utilizes machine-learning-based classification methods, and it can efficiently distinguish between bleeding and nonbleeding frames and perform pixel-level segmentation of bleeding areas in WCE frames. The experimental studies performed demonstrate the performance of the proposed bleeding detection method in terms of detection accuracy, where it is at least as good as the state-of-the-art approaches. In this research, we have conducted a broad comparison of a number of different state-of-the-art features and classification methods, which allows building an efficient and flexible WCE video processing system.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  8. Raghavendra U, Gudigar A, Bhandary SV, Rao TN, Ciaccio EJ, Acharya UR
    J Med Syst, 2019 Jul 30;43(9):299.
    PMID: 31359230 DOI: 10.1007/s10916-019-1427-x
    Glaucoma is a type of eye condition which may result in partial or complete vision loss. Higher intraocular pressure is the leading cause of this condition. Screening for glaucoma and early detection can avert vision loss. Computer-aided diagnosis (CAD) is an automated process with the potential to identify glaucoma early through quantitative analysis of digital fundus images. Preparing an effective model for CAD requires a large database. This study presents a CAD tool for the precise detection of glaucoma using a machine learning approach. An autoencoder is trained to determine effective and important features from fundus images. These features are used to develop classes of glaucoma for testing. The method achieved an F-measure value of 0.95 utilizing 1426 digital fundus images (589 control and 837 glaucoma). The efficacy of the system is evident, and is suggestive of its possible utility as an additional tool for verification of clinical decisions.
    Matched MeSH terms: Pattern Recognition, Automated/methods
  9. Yildirim O, Baloglu UB, Tan RS, Ciaccio EJ, Acharya UR
    Comput Methods Programs Biomed, 2019 Jul;176:121-133.
    PMID: 31200900 DOI: 10.1016/j.cmpb.2019.05.004
    BACKGROUND AND OBJECTIVE: For diagnosis of arrhythmic heart problems, electrocardiogram (ECG) signals should be recorded and monitored. The long-term signal records obtained are analyzed by expert cardiologists. Devices such as the Holter monitor have limited hardware capabilities. For improved diagnostic capacity, it would be helpful to detect arrhythmic signals automatically. In this study, a novel approach is presented as a candidate solution for these issues.

    METHODS: A convolutional auto-encoder (CAE) based nonlinear compression structure is implemented to reduce the signal size of arrhythmic beats. Long-short term memory (LSTM) classifiers are employed to automatically recognize arrhythmias using ECG features, which are deeply coded with the CAE network.

    RESULTS: Based upon the coded ECG signals, both the storage requirement and the classification time were considerably reduced. In experimental studies conducted with the MIT-BIH arrhythmia database, ECG signals were compressed with an average percentage root mean square difference (PRD) rate of 0.70%, and an accuracy of over 99.0% was observed.

    CONCLUSIONS: One of the significant contributions of this study is that the proposed approach can significantly reduce time duration when using LSTM networks for data analysis. Thus, a novel and effective approach was proposed for both ECG signal compression, and their high-performance automatic recognition, with very low computational cost.

    Matched MeSH terms: Pattern Recognition, Automated
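    The PRD figure quoted in the results above measures how faithfully the compressed ECG is reconstructed. One common definition (the variant without mean removal; the paper may use a different variant) is:

```python
def prd(original, reconstructed):
    """Percentage root-mean-square difference between a signal and its
    reconstruction: 100 * sqrt(sum((x - x_hat)^2) / sum(x^2))."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * (num / den) ** 0.5

# Toy signal and a slightly perturbed reconstruction (invented values).
x = [1.0, 2.0, 3.0, 4.0]
x_hat = [1.0, 2.1, 2.9, 4.0]
print(round(prd(x, x_hat), 3))  # ≈ 2.582
```

    A PRD of 0.70%, as reported, therefore means the reconstruction error energy is tiny relative to the signal energy.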
  10. Khalid A, Lim E, Chan BT, Abdul Aziz YF, Chee KH, Yap HJ, et al.
    J Magn Reson Imaging, 2019 04;49(4):1006-1019.
    PMID: 30211445 DOI: 10.1002/jmri.26302
    BACKGROUND: Existing clinical diagnostic and assessment methods could be improved to facilitate early detection and treatment of cardiac dysfunction associated with acute myocardial infarction (AMI) to reduce morbidity and mortality.

    PURPOSE: To develop 3D personalized left ventricular (LV) models and thickening assessment framework for assessing regional wall thickening dysfunction and dyssynchrony in AMI patients.

    STUDY TYPE: Retrospective study, diagnostic accuracy.

    SUBJECTS: Forty-four subjects consisting of 15 healthy subjects and 29 AMI patients.

    FIELD STRENGTH/SEQUENCE: 1.5T/steady-state free precession cine MRI scans; LGE MRI scans.

    ASSESSMENT: Quantitative thickening measurements across all cardiac phases were correlated and validated against clinical evaluation of infarct transmurality by an experienced cardiac radiologist based on the American Heart Association (AHA) 17-segment model.

    STATISTICAL TEST: Nonparametric 2-k related sample-based Kruskal-Wallis test; Mann-Whitney U-test; Pearson's correlation coefficient.

    RESULTS: Healthy LV wall segments underwent significant wall thickening, whereas infarcted segments (> 50% transmurality) underwent remarkable wall thinning during contraction (thickening index [TI] = 1.46 ± 0.26 mm) as opposed to healthy myocardium (TI = 4.01 ± 1.04 mm). For AMI patients, LV segments that showed signs of thinning were found to be associated with a significantly higher percentage of dyssynchrony as compared with healthy subjects (dyssynchrony index [DI] = 15.0 ± 5.0% vs. 7.5 ± 2.0%).

    Matched MeSH terms: Pattern Recognition, Automated
  11. Olayiwola Babarinsa, Hailiza Kamarulhaili
    MATEMATIKA, 2019;35(1):25-38.
    The proposed modified methods of Cramer's rule consider the column vector as well as the coefficient matrix concurrently in the linear system. The modified methods can be applied since Cramer's rule is typically known for solving the linear systems arising in $WZ$ factorization to yield the Z-matrix. We then present our results to show that there is no tangible difference in performance time between Cramer's rule and the modified methods in the factorization under improved versions of MATLAB. Additionally, the Frobenius norm of the modified methods in the factorization is better than that of Cramer's rule, irrespective of the version of MATLAB used.
    Matched MeSH terms: Pattern Recognition, Automated
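    For reference, plain Cramer's rule, which the modified methods build on, solves Ax = b by replacing one column of A with b at a time: x_j = det(A_j) / det(A). The abstract gives no formulas for the modifications themselves, so this dependency-free sketch shows only the classical rule.

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for the
    small systems Cramer's rule is practical for)."""
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve Ax = b: x_j = det(A with column j replaced by b) / det(A)."""
    d = det(A)
    return [det([row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]) / d
            for j in range(len(A))]

# 2x1 + x2 = 5, x1 + 3x2 = 10  =>  x = (1, 3)
print(cramer([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```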
  12. Iqbal U, Wah TY, Habib Ur Rehman M, Mujtaba G, Imran M, Shoaib M
    J Med Syst, 2018 Nov 05;42(12):252.
    PMID: 30397730 DOI: 10.1007/s10916-018-1107-2
    Electrocardiography (ECG) sensors play a vital role in the Internet of Medical Things, and these sensors help in monitoring the electrical activity of the heart. ECG signal analysis can improve human life in many ways, from diagnosing diseases among cardiac patients to managing the lifestyles of diabetic patients. Abnormalities in heart activities lead to different cardiac diseases and arrhythmia. However, some cardiac diseases, such as myocardial infarction (MI) and atrial fibrillation (Af), require special attention due to their direct impact on human life. The classification of flattened T wave cases of MI in ECG signals, and how similar these cases are to ST-T changes in MI, remains an open issue for researchers. This article presents a novel contribution to classify MI and Af. To this end, we propose a new approach called deep deterministic learning (DDL), which works by combining predefined heart activities with fused datasets. In this research, we used two datasets. The first dataset, Massachusetts Institute of Technology-Beth Israel Hospital, is publicly available, and we exclusively obtained the second dataset from the University of Malaya Medical Center, Kuala Lumpur, Malaysia. We first initiated predefined activities on each individual dataset to recognize patterns between the ST-T change and flattened T wave cases and then used the data fusion approach to merge both datasets in a manner that delivers the most accurate pattern recognition results. The proposed DDL approach is a systematic stage-wise methodology that relies on accurate detection of R peaks in ECG signals, time-domain features of ECG signals, and fine-tuning of artificial neural networks. The empirical evaluation shows high accuracy (up to 99.97%) in pattern-matching ST-T changes and flattened T waves using the proposed DDL approach. The proposed pattern recognition approach is a significant contribution to the diagnosis of special cases of MI.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
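    DDL is said to rely on accurate R-peak detection. The simplest form of such a detector, a threshold plus a local-maximum test, is sketched below on an invented toy trace; real detectors (e.g. the Pan-Tompkins family) add band-pass filtering, differentiation, and adaptive thresholds on top of this idea.

```python
def detect_r_peaks(signal, threshold=0.5):
    """Return indices of samples above `threshold` that exceed both
    neighbours: a crude local-maximum R-peak detector."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

# Toy ECG-like trace with two prominent R-like spikes at indices 2 and 6.
ecg = [0.0, 0.1, 0.9, 0.2, 0.0, 0.05, 1.1, 0.3, 0.0]
print(detect_r_peaks(ecg))  # [2, 6]
```

    Once R peaks are located, beat-to-beat intervals and the surrounding ST-T segments can be cut out as the time-domain features the abstract mentions.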
  13. Acharya UR, Raghavendra U, Koh JEW, Meiburger KM, Ciaccio EJ, Hagiwara Y, et al.
    Comput Methods Programs Biomed, 2018 Nov;166:91-98.
    PMID: 30415722 DOI: 10.1016/j.cmpb.2018.10.006
    BACKGROUND AND OBJECTIVE: Liver fibrosis is a type of chronic liver injury that is characterized by an excessive deposition of extracellular matrix protein. Early detection of liver fibrosis may prevent further growth toward liver cirrhosis and hepatocellular carcinoma. In the past, the only method to assess liver fibrosis was through biopsy, but this examination is invasive, expensive, prone to sampling errors, and may cause complications such as bleeding. Ultrasound-based elastography is a promising tool to measure tissue elasticity in real time; however, this technology requires an upgrade of the ultrasound system and software. In this study, a novel computer-aided diagnosis tool is proposed to automatically detect and classify the various stages of liver fibrosis based upon conventional B-mode ultrasound images.

    METHODS: The proposed method uses a 2D contourlet transform and a set of texture features that are efficiently extracted from the transformed image. Then, the combination of a kernel discriminant analysis (KDA)-based feature reduction technique and analysis of variance (ANOVA)-based feature ranking technique was used, and the images were then classified into various stages of liver fibrosis.

    RESULTS: Our 2D contourlet transform and texture feature analysis approach achieved a 91.46% accuracy using only four features input to the probabilistic neural network classifier, to classify the five stages of liver fibrosis. It also achieved a 92.16% sensitivity and 88.92% specificity for the same model. The evaluation was done on a database of 762 ultrasound images belonging to five different stages of liver fibrosis.

    CONCLUSIONS: The findings suggest that the proposed method can be useful to automatically detect and classify liver fibrosis, which would greatly assist clinicians in making an accurate diagnosis.

    Matched MeSH terms: Pattern Recognition, Automated
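    The ANOVA-based feature ranking step in the methods above can be sketched with the one-way F statistic: between-group variance over within-group variance, so features that differ sharply across fibrosis stages rank highest. The toy "feature value per stage" groups below are invented for illustration.

```python
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic over k groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# A feature whose value tracks the (hypothetical) stage separates the groups...
good_feature = [[1.0, 1.1, 0.9], [5.0, 5.2, 4.8], [9.0, 9.1, 8.9]]
# ...while a feature with the same spread inside every group does not.
weak_feature = [[1.0, 5.0, 9.0], [1.1, 5.2, 8.8], [0.9, 5.1, 9.2]]
print(anova_f(good_feature) > anova_f(weak_feature))  # prints True
```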
  14. Ahmed MA, Zaidan BB, Zaidan AA, Salih MM, Lakulu MMB
    Sensors (Basel), 2018 Jul 09;18(7).
    PMID: 29987266 DOI: 10.3390/s18072208
    Loss of the ability to speak or hear exerts psychological and social impacts on the affected persons due to the lack of proper communication. Multiple and systematic scholarly interventions that vary according to context have been implemented to overcome disability-related difficulties. Sign language recognition (SLR) systems based on sensory gloves are significant innovations that aim to procure data on the shape or movement of the human hand. Innovative technology for this matter is mainly restricted and dispersed. The available trends and gaps should be explored in this research approach to provide valuable insights into technological environments. Thus, a review is conducted to create a coherent taxonomy to describe the latest research divided into four main categories: development, framework, other hand gesture recognition, and reviews and surveys. Then, we conduct analyses of the glove systems for SLR device characteristics, develop a roadmap for technology evolution, discuss its limitations, and provide valuable insights into technological environments. This will help researchers to understand the current options and gaps in this area, thus contributing to this line of research.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  15. Adam M, Oh SL, Sudarshan VK, Koh JE, Hagiwara Y, Tan JH, et al.
    Comput Methods Programs Biomed, 2018 Jul;161:133-143.
    PMID: 29852956 DOI: 10.1016/j.cmpb.2018.04.018
    Cardiovascular diseases (CVDs) are the leading cause of deaths worldwide. The rising mortality rate can be reduced by early detection and treatment interventions. Clinically, the electrocardiogram (ECG) signal provides useful information about cardiac abnormalities and hence is employed as a diagnostic modality for the detection of various CVDs. However, only subtle changes in these time series indicate a particular disease. Therefore, it may be monotonous, time-consuming and stressful to inspect these ECG beats manually. In order to overcome this limitation of manual ECG signal analysis, this paper uses a novel discrete wavelet transform (DWT) method combined with nonlinear features for automated characterization of CVDs. ECG signals of normal, dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM) and myocardial infarction (MI) subjects are subjected to five levels of DWT. Relative wavelet values of four nonlinear features, namely fuzzy entropy, sample entropy, fractal dimension and signal energy, are extracted from the DWT coefficients. These features are fed to a sequential forward selection (SFS) technique and then ranked using the ReliefF method. Our proposed methodology achieved maximum classification accuracy (acc) of 99.27%, sensitivity (sen) of 99.74%, and specificity (spec) of 98.08% with the K-nearest neighbor (kNN) classifier using 15 features ranked by the ReliefF method. Our proposed methodology can be used by clinical staff to make a faster and more accurate diagnosis of CVDs. Thus, the chances of survival can be significantly increased by early detection and treatment of CVDs.
    Matched MeSH terms: Pattern Recognition, Automated*
  16. Dawood F, Loo CK
    Int J Neural Syst, 2018 May;28(4):1750038.
    PMID: 29022403 DOI: 10.1142/S0129065717500381
    Imitation learning through self-exploration is essential in developing sensorimotor skills. Most developmental theories emphasize that social interactions, especially understanding of observed actions, could first be achieved through imitation, yet the discussion on the origin of primitive imitative abilities is often neglected, referring instead to the possibility of its innateness. This paper presents a developmental model of imitation learning based on the hypothesis that a humanoid robot acquires imitative abilities as induced by sensorimotor associative learning through self-exploration. In designing such a learning system, several key issues are addressed: automatic segmentation of the observed actions into motion primitives using raw images acquired from the camera without requiring any kinematic model; incremental learning of spatio-temporal motion sequences to dynamically generate a topological structure in a self-stabilizing manner; organization of the learned data for easy and efficient retrieval using a dynamic associative memory; and utilization of the segmented motion primitives to generate complex behavior by combining these motion primitives. In our experiment, the self-posture is acquired by observing the image of the robot's own body posture while performing actions in front of a mirror through body babbling. The complete architecture was evaluated by simulation and by real robot experiments performed on the DARwIn-OP humanoid robot.
    Matched MeSH terms: Pattern Recognition, Automated
  17. Mostafa SA, Mustapha A, Mohammed MA, Ahmad MS, Mahmoud MA
    Int J Med Inform, 2018 04;112:173-184.
    PMID: 29500017 DOI: 10.1016/j.ijmedinf.2018.02.001
    Autonomous agents are being widely used in many systems, such as ambient assisted-living systems, to perform tasks on behalf of humans. However, these systems usually operate in complex environments that entail uncertain, highly dynamic, or irregular workload. In such environments, autonomous agents tend to make decisions that lead to undesirable outcomes. In this paper, we propose a fuzzy-logic-based adjustable autonomy (FLAA) model to manage the autonomy of multi-agent systems that are operating in complex environments. This model aims to facilitate the autonomy management of agents and help them make competent autonomous decisions. The FLAA model employs fuzzy logic to quantitatively measure and distribute autonomy among several agents based on their performance. We implement and test this model in the Automated Elderly Movements Monitoring (AEMM-Care) system, which uses agents to monitor the daily movement activities of elderly users and perform fall detection and prevention tasks in a complex environment. The test results show that the FLAA model improves the accuracy and performance of these agents in detecting and preventing falls.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
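    The FLAA model "employs fuzzy logic to quantitatively measure and distribute autonomy", but the paper's actual membership functions are not given in the abstract. Below is a generic triangular-membership sketch with hypothetical low/medium/high performance sets, of the kind such fuzzy controllers typically start from.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), peaking at 1 at x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzification of an agent performance score in [0, 1].
perf = 0.7
low = tri(perf, -0.01, 0.0, 0.5)
med = tri(perf, 0.0, 0.5, 1.0)
high = tri(perf, 0.5, 1.0, 1.01)
print(round(low, 2), round(med, 2), round(high, 2))
```

    A full FLAA-style controller would feed such memberships through fuzzy rules to decide how much autonomy each agent receives; this sketch covers only the fuzzification step.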
  18. Abdulhay E, Mohammed MA, Ibrahim DA, Arunkumar N, Venkatraman V
    J Med Syst, 2018 Feb 17;42(4):58.
    PMID: 29455440 DOI: 10.1007/s10916-018-0912-y
    Blood leucocyte segmentation in medical images is viewed as a difficult process due to the variability of blood cells in shape and size and the difficulty of determining the location of the leucocytes. Manual analysis of blood tests to recognize leukocytes is tedious, time-consuming and liable to error because of the varied morphological components of the cells. Segmentation of medical imagery is considered a difficult task because of the complexity of the images, and also because of the non-availability of leucocyte models that entirely capture the probable shapes of each structure and incorporate cell overlapping, the expansive variety of blood cells in shape and size, the various elements influencing the outer appearance of the blood leucocytes, and low static microscope image contrast arising from noise. We suggest a strategy for the segmentation of blood leucocytes in static microscope images that combines three prevailing computer vision techniques: enhancing the image, support vector machine segmentation of the image, and filtering out non-ROI (region of interest) regions on the basis of local binary patterns and texture features. Each of these strategies is adapted to the blood leucocyte segmentation problem, so the resulting techniques are considerably more robust than their individual components. Finally, we assess the framework by comparing its output with manual segmentation. The findings of this study demonstrate a new approach that automatically segments blood leucocytes and identifies them in static microscope images. Initially, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. After that, non-ROI regions are filtered out based on histogram analysis to select the right object. Finally, the blood leucocyte type is identified using texture features. The performance of the proposed approach was tested against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to the manual segmentation method for accurately determining the ROI. We evaluated blood leucocyte identification using the ROI texture (LBP feature); the identification accuracy of the technique is about 95.3%, with 100% sensitivity and 91.66% specificity.
    Matched MeSH terms: Pattern Recognition, Automated/methods*
  19. Al-Saiagh W, Tiun S, Al-Saffar A, Awang S, Al-Khaleefa AS
    PLoS One, 2018;13(12):e0208695.
    PMID: 30571777 DOI: 10.1371/journal.pone.0208695
    Word sense disambiguation (WSD) is the process of identifying an appropriate sense for an ambiguous word. With the complexity of human languages, in which a single word can yield different meanings, WSD has been utilized by several domains of interest such as search engines and machine translation. The literature shows a vast number of techniques used for the process of WSD. Recently, researchers have focused on the use of meta-heuristic approaches to identify the best solutions that reflect the best sense. However, the application of meta-heuristic approaches remains limited and thus requires efficient exploration and exploitation of the problem space. Hence, the current study aims to propose a hybrid meta-heuristic method that consists of particle swarm optimization (PSO) and simulated annealing to find the globally best meaning of a given text. Different semantic measures have been utilized in this model as objective functions for the proposed hybrid PSO. These measures consist of the JCN and extended Lesk methods, which are combined effectively in this work. The proposed method is tested using three benchmark datasets (SemCor 3.0, SensEval-2, and SensEval-3). Results show that the proposed method has superior performance in comparison with state-of-the-art approaches.
    Matched MeSH terms: Pattern Recognition, Automated
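    The simulated annealing component of the hybrid above can be sketched generically: worse candidates are occasionally accepted with probability exp(delta / T) under a cooling schedule, which is what lets the search escape local optima that pure hill climbing would get stuck in. The integer "sense ids", scoring function, and cooling schedule below are all invented stand-ins, not the paper's JCN/Lesk objective.

```python
import math
import random

def simulated_annealing(score, start, neighbours, steps=500, t0=1.0, seed=1):
    """Maximize `score`; worse candidates are accepted with probability
    exp(delta / T) so the search can escape local optima."""
    rng = random.Random(seed)
    cur, cur_s = start, score(start)
    best, best_s = cur, cur_s
    for step in range(1, steps + 1):
        t = t0 * 0.99 ** step              # geometric cooling schedule
        cand = rng.choice(neighbours(cur))
        cand_s = score(cand)
        delta = cand_s - cur_s
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur, cur_s = cand, cand_s
        if cur_s > best_s:
            best, best_s = cur, cur_s
    return best, best_s

# Toy "sense selection": integer sense ids scored by closeness to the
# (hypothetical) best-fitting sense 6.
score = lambda s: -abs(s - 6)
neighbours = lambda s: [s - 1, s + 1]
best, best_s = simulated_annealing(score, 0, neighbours)
print(best, best_s)
```

    In the hybrid method, PSO explores the sense-combination space globally while an annealing step like this refines individual candidates.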
  20. AlDahoul N, Md Sabri AQ, Mansoor AM
    Comput Intell Neurosci, 2018;2018:1639561.
    PMID: 29623089 DOI: 10.1155/2018/1639561
    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on utilizing handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object sizes. On the other hand, feature learning approaches are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods which combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training takes 445 seconds, while learning in S-CNN takes 770 seconds with a high-performance Graphical Processing Unit (GPU).
    Matched MeSH terms: Pattern Recognition, Automated/methods*