Displaying publications 81 - 100 of 120 in total

  1. Sheikh Abdullah SN, Bohani FA, Nayef BH, Sahran S, Al Akash O, Iqbal Hussain R, et al.
    Comput Math Methods Med, 2016;2016:8603609.
    PMID: 27516807 DOI: 10.1155/2016/8603609
    Brain magnetic resonance imaging (MRI) classification into normal and abnormal is a critical and challenging task. Owing to that, several medical imaging classification techniques have been devised, of which Learning Vector Quantization (LVQ) is among the most promising. The main goal of this paper is to enhance the performance of the LVQ technique in order to achieve higher accuracy in detecting brain tumors in MRIs. The classical way of selecting the winner code vector in LVQ is to measure the distance between the input vector and the codebook vectors using the Euclidean distance function. To improve the winner selection, a round-off function is employed along with the Euclidean distance function. Moreover, in competitive learning classifiers the fitted model is highly dependent on the class distribution. Therefore, this paper proposes a multiresampling technique through which a better class distribution can be achieved. This multiresampling is executed using random selection via preclassification. The test data are brain tumor magnetic resonance images collected from Universiti Kebangsaan Malaysia Medical Center and UCI benchmark data sets. Comparative studies covering LVQ1, Multipass LVQ, Hierarchical LVQ, Multilayer Perceptron, and Radial Basis Function classifiers showed that the proposed methods yield promising results.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
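The core of the LVQ enhancement described in entry 1 is the winner-selection step: the code vector closest to the input (by Euclidean distance, here optionally rounded) is pulled toward or pushed away from the input depending on whether the labels match. Below is a minimal NumPy sketch of one LVQ1 update under that assumption; the exact placement of the round-off and the learning rate are illustrative, not the paper's settings.

```python
import numpy as np

def lvq1_step(x, codebook, labels, y, lr=0.05, use_round=True):
    """One LVQ1 update: pick the winning code vector by Euclidean distance
    (optionally rounded, echoing the enhancement in the entry above) and
    move it toward/away from the input depending on its label."""
    d = np.linalg.norm(codebook - x, axis=1)      # Euclidean distances
    if use_round:
        d = np.round(d)                           # round-off applied to distances (assumed placement)
    w = int(np.argmin(d))                         # winner index
    sign = 1.0 if labels[w] == y else -1.0        # attract if labels match, repel otherwise
    codebook[w] += sign * lr * (x - codebook[w])
    return w

# toy usage with random data in place of MRI-derived features
rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 8))                # 4 code vectors, 8 features
labels = np.array([0, 0, 1, 1])
x, y = rng.normal(size=8), 1
print("winner:", lvq1_step(x, codebook, labels, y))
```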
  2. Yousef Kalafi E, Tan WB, Town C, Dhillon SK
    BMC Bioinformatics, 2016 Dec 22;17(Suppl 19):511.
    PMID: 28155722 DOI: 10.1186/s12859-016-1376-z
    BACKGROUND: Monogeneans are flatworms (Platyhelminthes) that are primarily found on the gills and skin of fishes. Monogenean parasites have attachment appendages at their haptoral regions that help them move about the body surface and feed on skin and gill debris. Haptoral attachment organs consist of sclerotized hard parts such as hooks, anchors and marginal hooks. Monogenean species are differentiated based on their haptoral bars, anchors, marginal hooks, the morphological characters of their reproductive parts (male and female copulatory organs), and soft anatomical parts. The complex structure of these diagnostic organs, and their overlap in microscopic digital images, are impediments to developing a fully automated identification system for monogeneans (LNCS 7666:256-263, 2012), (ISDA:457-462, 2011), (J Zoolog Syst Evol Res 52(2):95-99, 2013). In this study, images of hard parts of the haptoral organs, such as bars and anchors, are used to develop a fully automated technique for monogenean species identification by applying image processing and machine learning methods.

    RESULT: Images of four monogenean species, namely Sinodiplectanotrema malayanus, Trianchoratus pahangensis, Metahaliotrema mizellei and Metahaliotrema sp. (undescribed), were used to develop an automated identification technique. K-nearest neighbour (KNN) was applied to classify the monogenean specimens based on the extracted features. 50% of the dataset was used for training and the other 50% for testing in the system evaluation. Our approach demonstrated an overall classification accuracy of 90%. Leave-One-Out (LOO) cross-validation was also used to validate the system, giving an accuracy of 91.25%.

    CONCLUSIONS: The methods presented in this study facilitate fast and accurate fully automated classification of monogeneans at the species level. In future studies more classes will be included in the model, the time to capture the monogenean images will be reduced and improvements in extraction and selection of features will be implemented.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
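Entry 2 pairs image-derived features with a KNN classifier, a 50/50 train/test split, and Leave-One-Out cross-validation. A short scikit-learn sketch of that evaluation protocol, using random placeholder feature vectors rather than the authors' haptoral-organ features:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, LeaveOneOut, cross_val_score

# placeholder feature matrix (stand-in for per-specimen shape features) and species labels
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 12))
y = rng.integers(0, 4, size=80)        # four species classes, as in the entry above

# 50/50 train/test split for the hold-out evaluation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("hold-out accuracy:", knn.score(X_te, y_te))

# Leave-One-Out cross-validation on the full set
loo_acc = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=LeaveOneOut())
print("LOO accuracy:", loo_acc.mean())
```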
  3. Mosleh MA, Baba MS, Malek S, Almaktari RA
    BMC Bioinformatics, 2016 Dec 22;17(Suppl 19):499.
    PMID: 28155649 DOI: 10.1186/s12859-016-1370-5
    BACKGROUND: Cephalometric analysis and measurement of skull parameters using X-ray images play an important role in predicting and monitoring orthodontic treatment. Manual cephalometric analysis and measurement is tedious, time-consuming, and subject to human error. Several cephalometric systems have been developed to automate the cephalometric procedure; however, no clear insights have been reported about the reliability, performance, and usability of those systems. This study evaluates the reliability, performance, and usability (using the System Usability Scale, SUS) of the developed cephalometric system, aspects which have not been reported in previous studies.

    METHODS: In this study a novel system named Ceph-X was developed to computerize the manual tasks of orthodontists during cephalometric measurement. Ceph-X was developed using image processing techniques with three main models: an X-ray image enhancement model, a landmark-locating model, and a computation model. Ceph-X was then evaluated using X-ray images of 30 subjects (male and female) obtained from the University of Malaya hospital. Three orthodontic specialists were involved in evaluating the accuracy (to avoid intra-examiner error) and performance of Ceph-X, and 20 orthodontic specialists evaluated its usability and user satisfaction using the SUS approach.

    RESULTS: Statistical analysis comparing the manual and automatic cephalometric approaches showed that Ceph-X achieved high accuracy of approximately 96.6%, with acceptable error variation of less than approximately 0.5 mm and 1°. The results showed that Ceph-X increased specialist performance and minimized the processing time needed to obtain cephalometric measurements of the human skull. Furthermore, the SUS analysis showed that Ceph-X received excellent usability feedback from users.

    CONCLUSIONS: Ceph-X has demonstrated the reliability, performance, and usability required for use by orthodontists in cephalometric analysis, diagnosis, and treatment planning.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
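The usability evaluation in entry 3 relies on the System Usability Scale (SUS), whose standard scoring rule is: odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), and the total is multiplied by 2.5 to give a 0-100 score. A minimal sketch of that computation; the questionnaire responses below are illustrative, not data from the study.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.
    Odd items (1st, 3rd, ...) score as (r - 1); even items as (5 - r);
    the total is scaled by 2.5 onto the 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r) for i, r in enumerate(responses))
    return total * 2.5

# illustrative responses from one participant (not the study's data)
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))   # -> 87.5
```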
  4. Abu A, Leow LK, Ramli R, Omar H
    BMC Bioinformatics, 2016 Dec 22;17(Suppl 19):505.
    PMID: 28155645 DOI: 10.1186/s12859-016-1362-5
    BACKGROUND: Taxonomists frequently identify specimens from various populations based on morphological characteristics and molecular data. This study looks into another invasive process for identification of the house shrew (Suncus murinus) using image analysis and machine learning approaches. Thus, an automated identification system was developed to assist with and simplify this task. In this study, seven shape-based descriptors, namely area, convex area, major axis length, minor axis length, perimeter, equivalent diameter and extent, are used as features to represent the digital image of the skull, which consists of dorsal, lateral and jaw views for each specimen. An Artificial Neural Network (ANN) is used as the classifier to classify the skulls of S. murinus by region (northern and southern populations of Peninsular Malaysia) and sex (adult male and female). Specimen classification using the training data set and identification using the testing data set were performed through two stages of ANNs.

    RESULTS: At present, the classifier used has achieved an accuracy of 100% based on the skull views. Classification and identification by region and sex attained accuracies of 72.5%, 87.5% and 80.0% for the dorsal, lateral, and jaw views, respectively. These results show that the shape characteristic features used are substantial, because they can differentiate the specimens by region and sex to an accuracy of 80% and above. Finally, an application was developed for use by the scientific community.

    CONCLUSIONS: This automated system demonstrates the practicability of using computer-assisted systems to provide an interesting alternative approach for quick and easy identification of unknown species.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
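Entry 4 describes seven shape descriptors (area, convex area, major/minor axis length, perimeter, equivalent diameter, extent) computed from skull images and fed to an ANN. A rough sketch of that pipeline with scikit-image region properties and a small multilayer perceptron; the thresholded test image, labels, and network size are assumptions rather than the paper's settings.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.neural_network import MLPClassifier

def shape_descriptors(binary_image):
    """Extract the seven shape features named in the entry above from the
    largest region of a binary image. Property names follow older
    scikit-image releases; newer ones also accept area_convex, etc."""
    r = max(regionprops(label(binary_image)), key=lambda p: p.area)
    return np.array([r.area, r.convex_area, r.major_axis_length,
                     r.minor_axis_length, r.perimeter,
                     r.equivalent_diameter, r.extent])

# synthetic blob standing in for a thresholded skull view
img = np.zeros((64, 64), dtype=bool)
img[20:44, 16:52] = True
print(shape_descriptors(img))

# toy ANN stage on placeholder descriptor vectors (labels are illustrative)
rng = np.random.default_rng(2)
X, y = rng.normal(size=(60, 7)), rng.integers(0, 2, size=60)   # e.g. northern vs southern
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```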
  5. Ahmadi H, Gholamzadeh M, Shahmoradi L, Nilashi M, Rashvand P
    Comput Methods Programs Biomed, 2018 Jul;161:145-172.
    PMID: 29852957 DOI: 10.1016/j.cmpb.2018.04.013
    BACKGROUND AND OBJECTIVE: Diagnosis, the initial step of medical practice, is one of the most important parts of complex clinical decision making and is usually accompanied by some degree of ambiguity and uncertainty. Since uncertainty is inseparable from the nature of medicine, fuzzy logic methods have been used as one of the best ways to reduce this ambiguity. Recently, a substantial body of literature has been published on fuzzy logic methods across a wide range of medical diagnostic applications. However, only a few review articles have been published in this context, and they date back almost ten years. Hence, we conducted a systematic review to determine the contribution of fuzzy logic methods to disease diagnosis across different medical practices.

    METHODS: Eight scientific databases were selected, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was employed as the basis for conducting this systematic review and meta-analysis. In line with the main objective of this research, inclusion and exclusion criteria were defined to limit the scope of the investigation. To achieve a structured meta-analysis, all eligible articles were classified by authors, publication year, journal or conference, applied fuzzy methods, main objectives of the research, problems and research gaps, tools used to model the fuzzy system, medical disciplines, sample sizes, the inputs and outputs of the system, findings and results, and finally the impact of the applied fuzzy methods on improving diagnosis. We then analyzed the results of these classifications to indicate the effect of fuzzy methods in reducing the complexity of diagnosis.

    RESULTS: The results of this study confirmed the effectiveness of applying different fuzzy methods in the disease diagnosis process and provide new insights for researchers about which diseases have received the most attention. This will help identify the diagnostic aspects of medical disciplines that are being neglected.

    CONCLUSIONS: Overall, this systematic review provides an appropriate platform for further research by identifying the research needs in the domain of disease diagnosis.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  6. AlDahoul N, Md Sabri AQ, Mansoor AM
    Comput Intell Neurosci, 2018;2018:1639561.
    PMID: 29623089 DOI: 10.1155/2018/1639561
    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow with three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a non-static camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The models are compared in terms of training and testing accuracy and learning speed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training takes 445 seconds, whereas learning in S-CNN takes 770 seconds on a high-performance Graphical Processing Unit (GPU).
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
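One of the three models in entry 6 uses a pretrained CNN purely as a feature extractor, with a separate classifier on top. A hedged sketch of that idea using torchvision's ResNet-18 (our choice of backbone, not necessarily the authors') with its classification head removed and a linear SVM on the pooled features; the random input tensors stand in for preprocessed frame crops.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

# pretrained backbone (downloads weights on first run) with the final fully
# connected layer replaced by identity, so it outputs a 512-d feature vector
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: float tensor of shape (N, 3, 224, 224), already normalized."""
    return backbone(batch).cpu().numpy()

# random tensors stand in for preprocessed crops from aerial video frames
frames = torch.randn(8, 3, 224, 224)
feats = extract_features(frames)                 # shape (8, 512)
labels = [0, 1, 0, 1, 0, 1, 0, 1]                # human / non-human, purely illustrative
svm = LinearSVC().fit(feats, labels)
print(svm.predict(feats[:2]))
```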
  7. Kahaki SMM, Arshad H, Nordin MJ, Ismail W
    PLoS One, 2018;13(7):e0200676.
    PMID: 30024921 DOI: 10.1371/journal.pone.0200676
    Image registration of remotely sensed imagery is challenging, as complex deformations are common. Different deformations, such as affine and homogeneous transformations, combined with multimodal data capture, can arise in the data acquisition process. These effects, when combined, tend to compromise the performance of currently available registration methods. A new image transform, known as the geometric mean projection transform, is introduced in this work. As it is deformation invariant, it can be employed as a feature descriptor, whereby it analyzes the functions of all vertical and horizontal signals in local areas of the image. Moreover, an invariant feature correspondence method is proposed as a point matching algorithm, which incorporates the new descriptor's dissimilarity metric. Considering the image as a signal, the proposed approach utilizes a square eigenvector correlation (SEC) based on eigenvector properties. In our experiments on standard test images sourced from the "Featurespace" and "IKONOS" datasets, the proposed method achieved higher average accuracy than other state-of-the-art image registration techniques. The accuracy of the proposed method was assessed using six standard evaluation metrics. Furthermore, statistical analyses, including the t-test and Friedman test, demonstrate that the method developed in this study is superior to existing methods.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  8. Ibrahim RW, Hasan AM, Jalab HA
    Comput Methods Programs Biomed, 2018 Sep;163:21-28.
    PMID: 30119853 DOI: 10.1016/j.cmpb.2018.05.031
    BACKGROUND AND OBJECTIVES: MRI brain tumor segmentation is challenging due to variations in tumor size, shape, location and feature intensity. Active contours have been applied to MRI scan segmentation because of their ability to produce regions with boundaries. The main difficulty encountered in active contour segmentation is boundary tracking, which is controlled by minimization of an energy function. Hence, this study proposes a novel fractional Wright function (FWF) as an energy minimization technique to improve the performance of the active contour without edges method.

    METHOD: In this study, we implement the FWF as an energy minimization function to replace the standard gradient-descent method used for minimization in the Chan-Vese segmentation technique. The proposed FWF is used to find the boundaries of an object by controlling the inside and outside values of the contour. Objective evaluation is used to distinguish the differences between the processed segmented images and the ground truth using a set of statistical parameters: true positives, true negatives, false positives, and false negatives.

    RESULTS: The FWF energy minimization was successfully implemented on the BRATS 2013 image dataset. The achieved overall average sensitivity score of the brain tumor segmentation was 94.8 ± 4.7%.

    CONCLUSIONS: The results demonstrate that the proposed FWF method minimized the energy function more effectively than the gradient-descent method used in the original three-dimensional active contour without edges (3DACWE) method.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
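The baseline that entry 8 builds on is the Chan-Vese active contour without edges; the paper's contribution replaces its gradient-descent energy minimization with a fractional Wright function, which is not reproduced here. A minimal sketch of the standard Chan-Vese segmentation with scikit-image, for orientation only (sample image and parameters are illustrative):

```python
from skimage import data, img_as_float
from skimage.segmentation import chan_vese

# standard (gradient-descent) Chan-Vese on a sample grayscale image;
# the fractional Wright function minimization from the entry above is not implemented here
image = img_as_float(data.camera())
segmentation = chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0, tol=1e-3)

print("foreground fraction:", segmentation.mean())
```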
  9. Iqbal U, Wah TY, Habib Ur Rehman M, Mujtaba G, Imran M, Shoaib M
    J Med Syst, 2018 Nov 05;42(12):252.
    PMID: 30397730 DOI: 10.1007/s10916-018-1107-2
    Electrocardiography (ECG) sensors play a vital role in the Internet of Medical Things; these sensors help in monitoring the electrical activity of the heart. ECG signal analysis can improve human life in many ways, from diagnosing diseases among cardiac patients to managing the lifestyles of diabetic patients. Abnormalities in heart activity lead to different cardiac diseases and arrhythmias. However, some cardiac diseases, such as myocardial infarction (MI) and atrial fibrillation (Af), require special attention due to their direct impact on human life. The classification of flattened T wave cases of MI in ECG signals, and how similar these cases are to ST-T changes in MI, remains an open issue for researchers. This article presents a novel contribution to classifying MI and Af. To this end, we propose a new approach called deep deterministic learning (DDL), which works by combining predefined heart activities with fused datasets. In this research, we used two datasets. The first dataset, from the Massachusetts Institute of Technology-Beth Israel Hospital, is publicly available; the second dataset was obtained exclusively from the University of Malaya Medical Center, Kuala Lumpur, Malaysia. We first applied predefined activities on each individual dataset to recognize patterns between the ST-T change and flattened T wave cases, and then used a data fusion approach to merge both datasets in a manner that delivers the most accurate pattern recognition results. The proposed DDL approach is a systematic, stage-wise methodology that relies on accurate detection of R peaks in ECG signals, time-domain features of ECG signals, and fine tuning of artificial neural networks. The empirical evaluation shows high accuracy (up to 99.97%) in matching patterns of ST-T changes and flattened T waves using the proposed DDL approach. The proposed pattern recognition approach is a significant contribution to the diagnosis of special cases of MI.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
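The DDL pipeline in entry 9 relies first on accurate R-peak detection and simple time-domain features of the ECG. A minimal sketch of those two steps with SciPy on a synthetic ECG-like signal; the peak-detection thresholds and sampling rate are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 360                                           # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
# synthetic ECG-like trace: sharp periodic "R peaks" plus a little noise
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + 0.05 * np.random.default_rng(3).normal(size=t.size)

# R-peak detection: enforce a refractory period via `distance` and a height threshold
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))

# simple time-domain features: RR intervals and heart rate
rr = np.diff(peaks) / fs                           # seconds between beats
print("beats:", len(peaks), "mean RR (s):", rr.mean(), "HR (bpm):", 60 / rr.mean())
```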
  10. Tiong KH, Chang JK, Pathmanathan D, Hidayatullah Fadlullah MZ, Yee PS, Liew CS, et al.
    Biotechniques, 2018 Dec;65(6):322-330.
    PMID: 30477327 DOI: 10.2144/btn-2018-0072
    We describe a novel automated cell detection and counting software package, QuickCount® (QC), designed for rapid quantification of cells. Bland-Altman plot and intraclass correlation coefficient (ICC) analyses demonstrated strong agreement between cell counts from QC and manual counts (mean and SD: -3.3 ± 4.5; ICC = 0.95). QC has higher recall than ImageJauto, CellProfiler and CellC, while the precision of QC, ImageJauto, CellProfiler and CellC is high and comparable. QC can delineate and count single cells from images of different cell densities with precision and recall above 0.9. QC is unique in that it provides a real-time preview while the parameters are optimized for accurate cell counts, and it requires minimal hands-on time: hundreds of images can be analyzed automatically in a matter of milliseconds. In conclusion, QC offers a rapid, accurate and versatile solution for large-scale cell quantification and addresses challenges often faced in cell biology research.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
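Entry 10 reports agreement between automated and manual cell counts using a Bland-Altman analysis (mean difference ± SD) and an intraclass correlation coefficient. A short sketch of the Bland-Altman part in NumPy; the paired counts below are placeholders, not QuickCount output.

```python
import numpy as np

def bland_altman(auto_counts, manual_counts):
    """Return the mean difference (bias), SD of differences, and the
    95% limits of agreement between two sets of paired counts."""
    diff = np.asarray(auto_counts, float) - np.asarray(manual_counts, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# placeholder paired counts (automated vs. manual), for illustration only
auto = [102, 250, 87, 143, 199, 310]
manual = [105, 255, 90, 147, 201, 315]
bias, sd, loa = bland_altman(auto, manual)
print(f"bias = {bias:.1f}, SD = {sd:.1f}, limits of agreement = {loa}")
```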
  11. Acharya UR, Faust O, Ciaccio EJ, Koh JEW, Oh SL, Tan RS, et al.
    Comput Methods Programs Biomed, 2019 Jul;175:163-178.
    PMID: 31104705 DOI: 10.1016/j.cmpb.2019.04.018
    BACKGROUND AND OBJECTIVE: Complex fractionated atrial electrograms (CFAE) may contain information concerning the electrophysiological substrate of atrial fibrillation (AF); they are therefore of interest for guiding catheter ablation treatment of AF. Electrogram signals are shaped by activation events, which are dynamical in nature. This makes it difficult to establish which signal properties can provide insight into the ablation site location. Nonlinear measures may provide additional information. To test this hypothesis, we used nonlinear measures to analyze CFAE.

    METHODS: CFAE from several atrial sites, recorded for a duration of 16 s, were acquired from 10 patients with persistent and 9 patients with paroxysmal AF. These signals were appraised using non-overlapping windows of 1-, 2- and 4-s durations. The resulting data sets were analyzed with Recurrence Plots (RP) and Recurrence Quantification Analysis (RQA). The data was also quantified via entropy measures.

    RESULTS: RQA exhibited unique plots for persistent versus paroxysmal AF. Similar patterns were observed to repeat throughout the RPs. Trends were consistent for signal segments of 1, 2 and 4 s in duration. This suggests that the underlying signal generation process is also repetitive, and that this repetitiveness can be detected even in 1-s sequences. The results also showed that most entropy metrics exhibited higher values (closer to equilibrium) for persistent AF data. It was also found that Determinism (DET), Trapping Time (TT), and Modified Multiscale Entropy (MMSE), extracted from signals acquired at locations on the posterior atrial free wall, are highly discriminative of persistent versus paroxysmal AF data.

    CONCLUSIONS: Short data sequences are sufficient to discern persistent from paroxysmal AF data with a significant difference, and can be useful for detecting repeating patterns of atrial activation.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
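Entry 11 rests on recurrence plots: a thresholded pairwise-distance matrix of (delay-embedded) signal states, from which RQA measures such as recurrence rate and determinism are derived. A minimal recurrence-plot and recurrence-rate sketch in NumPy; the embedding parameters, threshold, and quasi-periodic test signal are illustrative, not the study's settings.

```python
import numpy as np

def recurrence_plot(signal, dim=3, delay=2, eps=0.2):
    """Delay-embed a 1-D signal and return the binary recurrence matrix:
    R[i, j] = 1 when embedded states i and j are within distance eps."""
    n = len(signal) - (dim - 1) * delay
    states = np.column_stack([signal[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists < eps).astype(int)

# quasi-periodic toy signal standing in for one electrogram window
t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t) + 0.1 * np.random.default_rng(4).normal(size=t.size)
R = recurrence_plot(x)

# recurrence rate: fraction of recurrent point pairs (a basic RQA measure)
print("recurrence rate:", R.mean())
```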
  12. Bilal M, Anis H, Khan N, Qureshi I, Shah J, Kadir KA
    Biomed Res Int, 2019;2019:6139785.
    PMID: 31119178 DOI: 10.1155/2019/6139785
    Background: Motion is a major source of blurring and ghosting in recovered MR images. It is more challenging in Dynamic Contrast-Enhanced (DCE) MRI because motion effects and the rapid intensity changes due to the contrast agent are difficult to distinguish from each other.

    Material and Methods: In this study, we introduce a new technique to reduce motion artifacts in DCE MRI, based on data binning and the low-rank plus sparse (L+S) reconstruction method. For data binning, radial k-space data are acquired continuously using the golden-angle radial sampling pattern and grouped into various motion states or bins. The respiratory signal for binning is extracted directly from the radially acquired k-space data. A compressed sensing (CS)-based L+S matrix decomposition model is then used to reconstruct motion-sorted DCE MR images. Undersampled free-breathing 3D liver and abdominal DCE MR data sets are used to validate the proposed technique.

    Results: The performance of the technique is compared with conventional L+S decomposition qualitatively, as well as in terms of image sharpness and the structural similarity index. Recovered images are visually sharper and show better similarity with the reference images.

    Conclusion: L+S decomposition provides improved MR images when data binning is used as a preprocessing step in the free-breathing scenario. Data binning resolves respiratory motion by dividing different respiratory positions into multiple bins. It also separates respiratory motion from contrast agent (CA) variations. The MR images recovered for each bin are better than those recovered without data binning.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
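The data-binning step in entry 12 sorts continuously acquired radial spokes into respiratory motion states using a respiratory signal extracted from k-space. A rough sketch of that sorting step in NumPy (the respiratory trace is synthetic and the number of bins is an assumption); the L+S reconstruction itself is not reproduced here.

```python
import numpy as np

def bin_spokes(resp_signal, n_bins=4):
    """Assign each radial spoke to a respiratory bin by splitting the
    respiratory amplitude range into equal-count quantile bins."""
    edges = np.quantile(resp_signal, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(resp_signal, edges)        # bin index per spoke, 0..n_bins-1

# synthetic respiratory trace sampled once per acquired spoke
n_spokes = 600
resp = np.sin(2 * np.pi * np.arange(n_spokes) / 120)   # ~120 spokes per breath (placeholder)

bins = bin_spokes(resp, n_bins=4)
for b in range(4):
    print(f"bin {b}: {np.sum(bins == b)} spokes")
```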
  13. Rajagopal H, Mokhtar N, Tengku Mohmed Noor Izam TF, Wan Ahmad WK
    PLoS One, 2020;15(5):e0233320.
    PMID: 32428043 DOI: 10.1371/journal.pone.0233320
    Image Quality Assessment (IQA) is essential for the accuracy of systems for automatic recognition of tree species from wood samples. In this study, a no-reference IQA (NR-IQA) metric for wood images, WNR-IQA, was proposed to assess wood image quality. Support Vector Regression (SVR) was trained using Generalized Gaussian Distribution (GGD) and Asymmetric Generalized Gaussian Distribution (AGGD) features measured from wood images, while the Mean Opinion Score (MOS) was obtained from subjective evaluation. This was followed by a comparison between the proposed WNR-IQA metric, three established NR-IQA metrics, namely the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), deepIQA, and Deep Bilinear Convolutional Neural Networks (DB-CNN), and five full-reference IQA (FR-IQA) metrics known as MSSIM, SSIM, FSIM, IWSSIM, and GMSD. The proposed WNR-IQA metric, BRISQUE, deepIQA, DB-CNN, and the FR-IQA metrics were then compared with MOS values to evaluate the performance of the automatic IQA metrics. As a result, the WNR-IQA metric exhibited higher performance than BRISQUE, deepIQA, DB-CNN, and the FR-IQA metrics. The highest-quality images may not be routinely available due to logistical factors such as dust, poor illumination, and the hot environment present in the timber industry. Moreover, motion blur can occur due to relative motion between the camera and the wood slice. Therefore, the advantage of WNR-IQA lies in its independence from a "perfect" reference image for image quality evaluation.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
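The WNR-IQA metric in entry 13 is, at its core, a support vector regression from natural-scene-statistics features (GGD/AGGD parameters) to subjective MOS values, evaluated by correlation with MOS. A hedged sketch of that training/evaluation loop with scikit-learn and SciPy, using random placeholder features instead of the GGD/AGGD fitting and synthetic MOS values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr

# placeholder feature matrix (stand-in for GGD/AGGD statistics per wood image)
rng = np.random.default_rng(5)
X = rng.normal(size=(120, 36))
mos = 3 + 0.8 * X[:, 0] + rng.normal(scale=0.3, size=120)   # synthetic MOS values

X_tr, X_te, y_tr, y_te = train_test_split(X, mos, test_size=0.3, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_tr, y_tr)
pred = model.predict(X_te)

# agreement with subjective scores, as in typical IQA evaluation
rho, _ = spearmanr(pred, y_te)
print("Spearman correlation with MOS:", round(rho, 3))
```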
  14. Ab Hamid F, Che Azemin MZ, Salam A, Aminuddin A, Mohd Daud N, Zahari I
    Curr Eye Res, 2016 Jun;41(6):823-31.
    PMID: 26268475 DOI: 10.3109/02713683.2015.1056375
    PURPOSE: The goal of this study was to provide empirical evidence for fractal dimension as an indirect measure of retinal vasculature density.

    MATERIALS AND METHODS: Two hundred retinal samples of the right eye [57.0% female (n = 114) and 43.0% male (n = 86)] were selected from the baseline visit. Custom-written software was used for vessel segmentation, the process of transforming two-dimensional color images into binary images (i.e. black and white pixels). A circular area of approximately 2.6 optic disc radii surrounding the center of the optic disc was cropped, and non-vessel fragments were removed. FracLac was used to measure the fractal dimension and vessel density of the retinal vessels.

    RESULTS: This study suggested that 14.1% of the region of interest (i.e. approximately 2.6 optic disc radii) comprised retinal vessel structure. Correlation analysis showed that vessel density measurement and fractal dimension estimation are linearly and strongly correlated (R = 0.942, R² = 0.89, p

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
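Entry 14 compares the fractal dimension of a binary vessel map (estimated with FracLac) against vessel density (the fraction of vessel pixels). A minimal box-counting fractal-dimension sketch in NumPy for a binary image; the box sizes and the random test image are illustrative, not the study's data or FracLac's exact algorithm.

```python
import numpy as np

def box_count_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting fractal dimension of a binary image:
    count occupied boxes at several scales and fit log(count) vs log(1/size)."""
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# random binary image standing in for a segmented vessel map (purely illustrative)
rng = np.random.default_rng(6)
img = rng.random((256, 256)) < 0.14          # ~14% "vessel" pixels, echoing the entry above
print("vessel density:", img.mean())
print("box-counting dimension:", round(box_count_dimension(img), 3))
```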
  15. Than JCM, Saba L, Noor NM, Rijal OM, Kassim RM, Yunus A, et al.
    Comput Biol Med, 2017 Oct 1;89:197-211.
    PMID: 28825994 DOI: 10.1016/j.compbiomed.2017.08.014
    Lung disease risk stratification is important for both diagnosis and treatment planning, particularly in biopsies and radiation therapy. Manual lung disease risk stratification is challenging because of: (a) large lung data sizes, (b) inter- and intra-observer variability of the lung delineation and (c) lack of feature amalgamation in the machine learning paradigm. This paper presents a two-stage cascaded CADx system consisting of: (a) a semi-automated lung delineation subsystem (LDS) for lung region extraction in CT slices, followed by (b) morphology-based lung tissue characterization, thereby addressing the above shortcomings. The LDS primarily uses entropy-based region extraction, while the ML-based lung characterization is based on an amalgamation of directional transforms such as Riesz and Gabor along with texture-based features comprising 100 greyscale features, using the K-fold cross-validation protocol (K = 2, 3, 5 and 10). The lung database consisted of 96 patients: 15 normal and 81 diseased. We use five high-resolution computed tomography (HRCT) levels representing different anatomical landmarks where disease is commonly seen. We demonstrate an amalgamated ML stratification accuracy of 99.53%, an increase of 2% over the conventional non-amalgamation ML system that uses Riesz-based features alone with feature selection based on feature strength. The robustness of the system was determined based on reliability and stability, which showed a reliability index of 0.99 and a deviation in risk stratification accuracies of less than 5%. Our CADx system shows 10% better performance when compared against the mean of five other prominent studies available in the literature covering over one decade.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  16. Raghavendra U, Gudigar A, Maithri M, Gertych A, Meiburger KM, Yeong CH, et al.
    Comput Biol Med, 2018 Apr 1;95:55-62.
    PMID: 29455080 DOI: 10.1016/j.compbiomed.2018.02.002
    Ultrasound imaging is one of the most common visualizing tools used by radiologists to identify the location of thyroid nodules. However, visual assessment of nodules is difficult and often affected by inter- and intra-observer variability. Thus, a computer-aided diagnosis (CAD) system can be helpful to cross-verify the severity of nodules. This paper proposes a new CAD system to characterize thyroid nodules using optimized multi-level elongated quinary patterns. In this study, higher-order spectral (HOS) entropy features extracted from these patterns appropriately distinguished benign and malignant nodules under particle swarm optimization (PSO) and support vector machine (SVM) frameworks. Our CAD algorithm achieved maximum accuracies of 97.71% and 97.01% on the private and public datasets, respectively. The evaluation of this CAD system on both private and public datasets confirmed its effectiveness as a secondary tool in assisting radiological findings.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  17. Abdulhay E, Mohammed MA, Ibrahim DA, Arunkumar N, Venkatraman V
    J Med Syst, 2018 Feb 17;42(4):58.
    PMID: 29455440 DOI: 10.1007/s10916-018-0912-y
    Blood leucocyte segmentation in medical images is viewed as a difficult process due to the variability of blood cells with respect to shape and size and the difficulty of determining the location of the leucocytes. Manual analysis of blood tests to recognize leucocytes is tedious, time-consuming and liable to error because of the varied morphological components of the cells. Segmentation of medical imagery is considered a difficult task because of the complexity of the images and the non-availability of leucocyte models that fully capture the probable shapes of each structure, incorporate cell overlapping and the large variety of blood cells with respect to shape and size, account for the various elements influencing the outer appearance of blood leucocytes, and handle the low contrast of static microscope images together with additional problems arising from noise. We suggest a strategy for segmentation of blood leucocytes in static microscope images that combines three prevailing approaches from the computer vision literature: image enhancement, support vector machine-based segmentation, and filtering out of non-ROI (region of interest) regions on the basis of local binary patterns and texture features. Each of these strategies is adapted to the blood leucocyte segmentation problem, so that the resulting techniques are considerably more robust than their individual components. Finally, we assess the framework by comparing its output with manual segmentation. The findings of this study present a new approach that automatically segments blood leucocytes and identifies them from static microscope images. Initially, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. After that, non-ROI regions are filtered out based on histogram analysis, so as to discard non-ROI areas and choose the right object. Finally, the blood leucocyte type is identified using texture features. The performance of the proposed approach was evaluated against manual examination by a gynaecologist using diverse measures. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to manual segmentation for accurately determining the ROI. We evaluated blood leucocyte identification using the ROI texture (LBP features); the identification accuracy of the technique is about 95.3%, with 100% sensitivity and 91.66% specificity.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
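Entry 17 filters out non-ROI regions and identifies leucocyte type using local binary pattern (LBP) texture features. A short sketch of LBP histogram extraction with scikit-image, which could feed a classifier such as the SVM mentioned in the entry; the LBP parameters and the sample image are assumptions, not the paper's settings.

```python
import numpy as np
from skimage import data, color, img_as_ubyte
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram of a grayscale patch, normalized to sum to 1.
    P/R (neighbours/radius) are illustrative choices, not the paper's values."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                    # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# sample grayscale image as a stand-in for a candidate cell region
gray = img_as_ubyte(color.rgb2gray(data.astronaut()))
print(lbp_histogram(gray))
```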
  18. Oung QW, Muthusamy H, Basah SN, Lee H, Vijean V
    J Med Syst, 2017 Dec 29;42(2):29.
    PMID: 29288342 DOI: 10.1007/s10916-017-0877-2
    Parkinson's disease (PD) is a progressive neurodegenerative disorder that has affected a large part of the population. Symptoms of PD include tremor, rigidity, slowness of movement and vocal impairment. In order to develop an effective diagnostic system, a number of algorithms have been proposed, mainly to distinguish healthy individuals from those with PD. However, most previous works were based on binary classification, with the early PD stage and the advanced ones treated equally. Therefore, in this work, we propose a multiclass classification with three classes of PD severity level (mild, moderate, severe) and healthy controls. The focus is to detect and classify PD using signals from wearable motion and audio sensors based on the empirical wavelet transform (EWT) and empirical wavelet packet transform (EWPT) respectively. The EWT/EWPT was applied to decompose both speech and motion data signals up to five levels. Next, several features were extracted from the instantaneous amplitudes and frequencies obtained by applying the Hilbert transform to the coefficients of the decomposed signals. The performance of the algorithm was analysed using three classifiers: K-nearest neighbour (KNN), probabilistic neural network (PNN) and extreme learning machine (ELM). Experimental results demonstrated that our proposed approach is able to differentiate PD from non-PD subjects, including their severity level, with classification accuracies of more than 90% using EWT/EWPT-ELM on signals from motion and audio sensors respectively. Additionally, a classification accuracy of more than 95% was achieved when EWT/EWPT-ELM was applied to the combined information from both signal types.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
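After the EWT/EWPT decomposition in entry 18, features are taken from the instantaneous amplitudes and frequencies obtained via the Hilbert transform of each sub-band. A minimal sketch of that Hilbert step in SciPy on one synthetic sub-band; the EWT decomposition itself is not reproduced, and the signal parameters are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000                                        # sampling rate (Hz), illustrative
t = np.arange(0, 1, 1 / fs)
band = np.cos(2 * np.pi * 40 * t) * (1 + 0.3 * np.sin(2 * np.pi * 2 * t))  # one "sub-band"

analytic = hilbert(band)
inst_amp = np.abs(analytic)                                              # instantaneous amplitude
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)    # instantaneous frequency (Hz)

# simple summary features of the kind that could feed KNN/PNN/ELM classifiers
print("mean amplitude:", inst_amp.mean(), "mean frequency:", inst_freq.mean())
```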
  19. Kamarudin ND, Ooi CY, Kawanabe T, Odaguchi H, Kobayashi F
    J Healthc Eng, 2017;2017:7460168.
    PMID: 29065640 DOI: 10.1155/2017/7460168
    In tongue diagnosis, the colour of the tongue body carries valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement due to unstable lighting conditions and the limited ability of the naked eye to capture the exact colour distribution on the tongue, especially for tongues with multicoloured substance. To overcome this ambiguity, this paper presents a two-stage multicolour classification of the tongue based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and a red colour range, for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second stage, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the true classification accuracy of the proposed two-stage approach in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is 48 seconds.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
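The first stage in entry 19 clusters tongue-image pixels into four colour groups with k-means before the SVM stage. A rough sketch of that pixel-clustering step with scikit-learn on an arbitrary RGB image; k = 4 follows the entry, while the image, colour space, and everything else here are illustrative.

```python
import numpy as np
from skimage import data
from sklearn.cluster import KMeans

# arbitrary RGB image standing in for a tongue photograph
image = data.astronaut()
pixels = image.reshape(-1, 3).astype(float)

# stage one: cluster pixels into 4 colour groups (background, deep red,
# red/light red, transitional in the entry above)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
cluster_map = kmeans.labels_.reshape(image.shape[:2])

for c, centre in enumerate(kmeans.cluster_centers_):
    share = np.mean(kmeans.labels_ == c)
    print(f"cluster {c}: mean RGB = {centre.round(1)}, {share:.1%} of pixels")
```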
  20. Mehdy MM, Ng PY, Shair EF, Saleh NIM, Gomes C
    Comput Math Methods Med, 2017;2017:2610628.
    PMID: 28473865 DOI: 10.1155/2017/2610628
    Medical imaging techniques are widely used in the diagnosis and detection of breast cancer. The drawback of applying these techniques is the large amount of time consumed in the manual diagnosis of each image pattern by a professional radiologist. Automated classifiers could substantially improve the diagnosis process, in terms of both accuracy and time requirements, by distinguishing benign and malignant patterns automatically. The neural network (NN) plays an important role in this respect, especially in the application of breast cancer detection. Despite the large number of publications that describe the use of NNs in various medical techniques, only a few reviews are available that guide the development of these algorithms to enhance detection with respect to specificity and sensitivity. The purpose of this review is to analyze the contents of recently published literature with special attention to the techniques and state of the art of NNs in medical imaging. We discuss the use of NNs in four different medical imaging applications to show that NNs are not restricted to a few areas of medicine. The types of NN used, along with the various types of input data, are reviewed. We also address hybrid NN adaptation in breast cancer detection.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*