Displaying publications 81 - 100 of 113 in total

  1. Raghavendra U, Gudigar A, Maithri M, Gertych A, Meiburger KM, Yeong CH, et al.
    Comput Biol Med, 2018 04 01;95:55-62.
    PMID: 29455080 DOI: 10.1016/j.compbiomed.2018.02.002
    Ultrasound imaging is one of the most common visualizing tools used by radiologists to identify the location of thyroid nodules. However, visual assessment of nodules is difficult and often affected by inter- and intra-observer variabilities. Thus, a computer-aided diagnosis (CAD) system can be helpful to cross-verify the severity of nodules. This paper proposes a new CAD system to characterize thyroid nodules using optimized multi-level elongated quinary patterns. In this study, higher order spectral (HOS) entropy features extracted from these patterns appropriately distinguished benign and malignant nodules under particle swarm optimization (PSO) and support vector machine (SVM) frameworks. Our CAD algorithm achieved a maximum accuracy of 97.71% and 97.01% in private and public datasets respectively. The evaluation of this CAD system on both private and public datasets confirmed its effectiveness as a secondary tool in assisting radiological findings.
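The multi-level elongated quinary pattern itself is not spelled out in the abstract; as a rough sketch of the quinary idea it builds on, the code below maps each neighbor of a 3x3 patch's center pixel to one of five levels using two thresholds. Function names and threshold values are illustrative, not taken from the paper.

```python
def quinary_code(center, neighbor, tau1=2, tau2=5):
    """Map a neighbor/center intensity difference to one of five levels
    (-2, -1, 0, 1, 2), depending on how far the neighbor lies below or
    above the center; tau1 < tau2 are illustrative thresholds."""
    d = neighbor - center
    if d >= tau2:
        return 2
    if d >= tau1:
        return 1
    if d > -tau1:
        return 0
    if d > -tau2:
        return -1
    return -2

def quinary_pattern(patch):
    """Encode the 8-neighborhood of the center pixel of a 3x3 patch,
    walking the neighbors clockwise from the top-left corner."""
    c = patch[1][1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    return [quinary_code(c, patch[r][col]) for r, col in coords]
```

Histograms of such codes over the image (at several scales and elongations) would then form the feature vector fed to the PSO/SVM stage.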
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  2. Ahmadi H, Gholamzadeh M, Shahmoradi L, Nilashi M, Rashvand P
    Comput Methods Programs Biomed, 2018 Jul;161:145-172.
    PMID: 29852957 DOI: 10.1016/j.cmpb.2018.04.013
    BACKGROUND AND OBJECTIVE: Diagnosis, the initial step of medical practice, is one of the most important parts of complicated clinical decision making and is usually accompanied by a degree of ambiguity and uncertainty. Since uncertainty is inseparable from the nature of medicine, fuzzy logic methods have been used as one of the best ways to decrease this ambiguity. Many studies have recently been published on fuzzy logic methods across a wide range of medical diagnostic applications, yet the few review articles available in this context are almost ten years old. Hence, we conducted a systematic review to determine the contribution of fuzzy logic methods to disease diagnosis in different medical practices.
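As a minimal illustration of how fuzzy logic expresses diagnostic ambiguity, the toy sketch below grades a temperature reading with triangular membership functions and defuzzifies a two-rule output by weighted average. All membership ranges and risk values are invented for illustration and carry no clinical meaning.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def diagnose(temp_c):
    """Toy two-rule Mamdani-style inference with singleton outputs:
    'normal temperature -> low risk (0.1)', 'fever -> high risk (0.9)'.
    Defuzzification is the membership-weighted average of the outputs."""
    normal = tri(temp_c, 35.0, 36.8, 38.0)
    fever = tri(temp_c, 37.0, 39.0, 42.0)
    if normal + fever == 0:
        return None  # reading falls outside the modeled range
    return (0.1 * normal + 0.9 * fever) / (normal + fever)
```

A reading of 37.5 activates both rules partially, yielding an intermediate risk rather than a hard yes/no; this graded output is exactly the ambiguity-absorbing behavior the review credits to fuzzy systems.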

    METHODS: Eight scientific databases were selected, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was employed as the basis for conducting this systematic review and meta-analysis. In line with the main objective of this research, inclusion and exclusion criteria were applied to limit the investigation. To achieve a structured meta-analysis, all eligible articles were classified by authors, publication year, journal or conference, applied fuzzy methods, main objectives of the research, problems and research gaps, tools used to model the fuzzy system, medical disciplines, sample sizes, inputs and outputs of the system, findings, results, and the impact of the applied fuzzy methods on improving diagnosis. We then analyzed the results of these classifications to show the effect of fuzzy methods in decreasing the complexity of diagnosis.

    RESULTS: The results of this study confirmed the effectiveness of applying different fuzzy methods in the disease diagnosis process, and offer researchers new insight into which diseases have received the most attention. This helps identify diagnostic aspects of medical disciplines that are being neglected.

    CONCLUSIONS: Overall, this systematic review provides an appropriate platform for further research by identifying the research needs in the domain of disease diagnosis.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  3. Ibrahim RW, Hasan AM, Jalab HA
    Comput Methods Programs Biomed, 2018 Sep;163:21-28.
    PMID: 30119853 DOI: 10.1016/j.cmpb.2018.05.031
    BACKGROUND AND OBJECTIVES: MRI brain tumor segmentation is challenging due to variations in the size, shape, location, and intensity features of tumors. Active contours have been applied to MRI scan segmentation owing to their ability to produce regions with boundaries. The main difficulty encountered in active contour segmentation is boundary tracking, which is controlled by the minimization of an energy function. Hence, this study proposes a novel fractional Wright function (FWF) as an energy-minimization technique to improve the performance of the active contour without edge method.

    METHOD: In this study, we implement the FWF as an energy-minimization function to replace the standard gradient-descent method used in the Chan-Vese segmentation technique. The proposed FWF is used to find the boundaries of an object by controlling the inside and outside values of the contour. Objective evaluation is used to distinguish the differences between the processed segmented images and the ground truth using a set of statistical parameters: true positive, true negative, false positive, and false negative.
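The fractional Wright function minimizer itself is beyond a short sketch, but the region-fitting energy that any Chan-Vese-style minimizer drives down is simple. A minimal NumPy sketch of that term, assuming a grayscale image array and a boolean mask marking the contour interior (this is the standard Chan-Vese fitting term, not the authors' code):

```python
import numpy as np

def chan_vese_energy(image, mask, lam1=1.0, lam2=1.0):
    """Region (fitting) term of the Chan-Vese energy for a binary contour mask.

    c1, c2 are the mean intensities inside and outside the contour; the
    energy sums squared deviations from those means. A good contour makes
    both regions near-homogeneous, driving the energy toward zero."""
    inside = image[mask]
    outside = image[~mask]
    c1 = inside.mean() if inside.size else 0.0
    c2 = outside.mean() if outside.size else 0.0
    e = lam1 * ((inside - c1) ** 2).sum() + lam2 * ((outside - c2) ** 2).sum()
    return e, c1, c2
```

A minimizer (gradient descent in the original method, the FWF here) evolves the mask/level set to reduce this energy; the length-penalty term of the full model is omitted for brevity.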

    RESULTS: The FWF as a minimization of energy was successfully implemented on BRATS 2013 image dataset. The achieved overall average sensitivity score of the brain tumors segmentation was 94.8 ± 4.7%.

    CONCLUSIONS: The results demonstrate that the proposed FWF method minimized the energy function more than the gradient-descent method used in the original three-dimensional active contour without edge (3DACWE) method.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  4. Acharya UR, Raghavendra U, Koh JEW, Meiburger KM, Ciaccio EJ, Hagiwara Y, et al.
    Comput Methods Programs Biomed, 2018 Nov;166:91-98.
    PMID: 30415722 DOI: 10.1016/j.cmpb.2018.10.006
    BACKGROUND AND OBJECTIVE: Liver fibrosis is a type of chronic liver injury that is characterized by an excessive deposition of extracellular matrix protein. Early detection of liver fibrosis may prevent further growth toward liver cirrhosis and hepatocellular carcinoma. In the past, the only method to assess liver fibrosis was through biopsy, but this examination is invasive, expensive, prone to sampling errors, and may cause complications such as bleeding. Ultrasound-based elastography is a promising tool to measure tissue elasticity in real time; however, this technology requires an upgrade of the ultrasound system and software. In this study, a novel computer-aided diagnosis tool is proposed to automatically detect and classify the various stages of liver fibrosis based upon conventional B-mode ultrasound images.

    METHODS: The proposed method uses a 2D contourlet transform and a set of texture features that are efficiently extracted from the transformed image. Then, the combination of a kernel discriminant analysis (KDA)-based feature reduction technique and analysis of variance (ANOVA)-based feature ranking technique was used, and the images were then classified into various stages of liver fibrosis.
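The ANOVA-based feature-ranking step can be illustrated independently of the contourlet features: compute a one-way F statistic per feature and rank features by it. A minimal NumPy sketch (not the authors' code; equivalent in spirit to scikit-learn's f_classif):

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F statistic per feature column.

    F = (between-class variance) / (within-class variance); higher values
    mean the feature separates the fibrosis stages (class labels) better,
    so features can be ranked by F for selection."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    n, k = X.shape[0], len(classes)
    grand = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Ranking by these scores (after the KDA reduction) is how a pipeline like this one would pick the handful of features fed to the classifier.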

    RESULTS: Our 2D contourlet transform and texture feature analysis approach achieved a 91.46% accuracy using only four features input to the probabilistic neural network classifier, to classify the five stages of liver fibrosis. It also achieved a 92.16% sensitivity and 88.92% specificity for the same model. The evaluation was done on a database of 762 ultrasound images belonging to five different stages of liver fibrosis.

    CONCLUSIONS: The findings suggest that the proposed method can be useful to automatically detect and classify liver fibrosis, which would greatly assist clinicians in making an accurate diagnosis.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  5. Iqbal U, Wah TY, Habib Ur Rehman M, Mujtaba G, Imran M, Shoaib M
    J Med Syst, 2018 Nov 05;42(12):252.
    PMID: 30397730 DOI: 10.1007/s10916-018-1107-2
    Electrocardiography (ECG) sensors play a vital role in the Internet of Medical Things, helping to monitor the electrical activity of the heart. ECG signal analysis can improve human life in many ways, from diagnosing diseases among cardiac patients to managing the lifestyles of diabetic patients. Abnormalities in heart activity lead to different cardiac diseases and arrhythmia. However, some cardiac diseases, such as myocardial infarction (MI) and atrial fibrillation (Af), require special attention due to their direct impact on human life. The classification of flattened T wave cases of MI in ECG signals, and how similar these cases are to ST-T changes in MI, remains an open issue for researchers. This article presents a novel contribution to classify MI and Af. To this end, we propose a new approach called deep deterministic learning (DDL), which works by combining predefined heart activities with fused datasets. In this research, we used two datasets. The first dataset, Massachusetts Institute of Technology-Beth Israel Hospital, is publicly available; we obtained the second dataset exclusively from the University of Malaya Medical Center, Kuala Lumpur, Malaysia. We first ran predefined activities on each individual dataset to recognize patterns between the ST-T change and flattened T wave cases, and then used a data fusion approach to merge both datasets in a manner that delivers the most accurate pattern recognition results. The proposed DDL approach is a systematic stage-wise methodology that relies on accurate detection of R peaks in ECG signals, time-domain features of ECG signals, and fine-tuning of artificial neural networks. The empirical evaluation shows high accuracy (i.e., ≤99.97%) in pattern matching of ST-T changes and flattened T waves using the proposed DDL approach. The proposed pattern recognition approach is a significant contribution to the diagnosis of special cases of MI.
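The DDL pipeline hinges on accurate R-peak detection as its first stage. A hedged sketch of only that stage: a naive threshold-and-local-maximum detector with a refractory distance. The threshold and minimum distance are illustrative, and a real detector (e.g. Pan-Tompkins-style filtering) does considerably more.

```python
def detect_r_peaks(signal, threshold, min_distance):
    """Naive R-peak detector: local maxima above `threshold`, kept only
    if at least `min_distance` samples after the previous accepted peak
    (a crude refractory period)."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] >= threshold
                and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1]):
            if not peaks or i - peaks[-1] >= min_distance:
                peaks.append(i)
    return peaks
```

The RR intervals between successive detected peaks are what the time-domain feature stage would then consume.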
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  6. Tiong KH, Chang JK, Pathmanathan D, Hidayatullah Fadlullah MZ, Yee PS, Liew CS, et al.
    Biotechniques, 2018 12;65(6):322-330.
    PMID: 30477327 DOI: 10.2144/btn-2018-0072
    We describe a novel automated cell detection and counting software, QuickCount® (QC), designed for rapid quantification of cells. Bland-Altman plot and intraclass correlation coefficient (ICC) analyses demonstrated strong agreement between QC and manual counts (mean and SD: -3.3 ± 4.5; ICC = 0.95). QC has higher recall than ImageJauto, CellProfiler, and CellC, while the precision of all four tools is high and comparable. QC can delineate and count single cells from images of different cell densities with precision and recall above 0.9. QC is unique in that it provides a real-time preview while parameters are optimized for accurate cell counts, and it needs minimal hands-on time, as hundreds of images can be analyzed automatically in a matter of milliseconds. In conclusion, QC offers a rapid, accurate and versatile solution for large-scale cell quantification and addresses challenges often faced in cell biology research.
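QC itself is closed software, but the core of any automated cell counter on a thresholded image is connected-component labeling; a minimal flood-fill sketch (4-connectivity, pure Python), not QC's actual algorithm:

```python
def count_cells(binary):
    """Count connected foreground components (4-connectivity) in a binary
    image given as a list of 0/1 rows - each component is one detected
    cell in this naive model (touching cells merge into one count)."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                count += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:  # iterative flood fill of one component
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```

Tools like QC add the hard parts on top of this: adaptive thresholding and splitting of touching cells, which is where the reported precision/recall differences come from.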
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  7. Niazi MKK, Abas FS, Senaras C, Pennell M, Sahiner B, Chen W, et al.
    PLoS One, 2018;13(5):e0196547.
    PMID: 29746503 DOI: 10.1371/journal.pone.0196547
    Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps of both positive and negative nuclei from real whole-slide images, and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers showed substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  8. AlDahoul N, Md Sabri AQ, Mansoor AM
    Comput Intell Neurosci, 2018;2018:1639561.
    PMID: 29623089 DOI: 10.1155/2018/1639561
    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow with three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured with a nonstatic camera on an aerial platform at varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset, and compared in terms of training and testing accuracy and learning speed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. The S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training takes 445 seconds, while learning in the S-CNN takes 770 seconds on a high-performance Graphical Processing Unit (GPU).
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  9. Kahaki SMM, Arshad H, Nordin MJ, Ismail W
    PLoS One, 2018;13(7):e0200676.
    PMID: 30024921 DOI: 10.1371/journal.pone.0200676
    Image registration of remotely sensed imagery is challenging, as complex deformations are common. Different deformations, such as affine and homogeneous transformations, combined with multimodal data capture, can emerge in the acquisition process. These effects, when combined, tend to compromise the performance of currently available registration methods. A new image transform, known as the geometric mean projection transform, is introduced in this work. As it is deformation-invariant, it can be employed as a feature descriptor that analyzes the functions of all vertical and horizontal signals in local areas of the image. Moreover, an invariant feature correspondence method is proposed as a point matching algorithm, which incorporates the new descriptor's dissimilarity metric. Considering the image as a signal, the proposed approach utilizes a square eigenvector correlation (SEC) measure based on eigenvector properties. In our experiments on standard test images sourced from the "Featurespace" and "IKONOS" datasets, the proposed method achieved higher average accuracy than other state-of-the-art image registration techniques. The accuracy of the proposed method was assessed using six standard evaluation metrics. Furthermore, statistical analyses, including the t-test and Friedman test, demonstrate that the method developed in this study is superior to the existing methods.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  10. Khalid A, Lim E, Chan BT, Abdul Aziz YF, Chee KH, Yap HJ, et al.
    J Magn Reson Imaging, 2019 04;49(4):1006-1019.
    PMID: 30211445 DOI: 10.1002/jmri.26302
    BACKGROUND: Existing clinical diagnostic and assessment methods could be improved to facilitate early detection and treatment of cardiac dysfunction associated with acute myocardial infarction (AMI) to reduce morbidity and mortality.

    PURPOSE: To develop 3D personalized left ventricular (LV) models and thickening assessment framework for assessing regional wall thickening dysfunction and dyssynchrony in AMI patients.

    STUDY TYPE: Retrospective study, diagnostic accuracy.

    SUBJECTS: Forty-four subjects consisting of 15 healthy subjects and 29 AMI patients.

    FIELD STRENGTH/SEQUENCE: 1.5T/steady-state free precession cine MRI scans; LGE MRI scans.

    ASSESSMENT: Quantitative thickening measurements across all cardiac phases were correlated and validated against clinical evaluation of infarct transmurality by an experienced cardiac radiologist based on the American Heart Association (AHA) 17-segment model.

    STATISTICAL TEST: Nonparametric 2-k related sample-based Kruskal-Wallis test; Mann-Whitney U-test; Pearson's correlation coefficient.

    RESULTS: Healthy LV wall segments underwent significant wall thickening during contraction, whereas infarcted segments (>50% transmurality) underwent remarkable wall thinning (thickening index [TI] = 1.46 ± 0.26 mm) as opposed to healthy myocardium (TI = 4.01 ± 1.04 mm). For AMI patients, LV segments that showed signs of thinning were associated with a significantly higher percentage of dyssynchrony than in healthy subjects (dyssynchrony index [DI] = 15.0 ± 5.0% vs. 7.5 ± 2.0%).

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  11. Vicnesh J, Wei JKE, Ciaccio EJ, Oh SL, Bhagat G, Lewis SK, et al.
    J Med Syst, 2019 Apr 26;43(6):157.
    PMID: 31028562 DOI: 10.1007/s10916-019-1285-6
    Celiac disease is a genetically determined disorder of the small intestine, occurring due to an immune response to ingested gluten-containing food. The resulting damage to the small intestinal mucosa hampers nutrient absorption, and is characterized by diarrhea, abdominal pain, and a variety of extra-intestinal manifestations. Invasive and costly methods such as endoscopic biopsy are currently used to diagnose celiac disease. Detection of the disease by histopathologic analysis of biopsies can be challenging due to suboptimal sampling. Video capsule images were obtained from celiac patients and controls for comparison and classification. This study exploits the use of DAISY descriptors to project two-dimensional images onto one-dimensional vectors. Shannon entropy is then used to extract features, after which a particle swarm optimization algorithm coupled with normalization is employed to select the 30 best features for classification. Statistical measures of this paradigm were tabulated. The accuracy, positive predictive value, sensitivity and specificity obtained in distinguishing celiac versus control video capsule images were 89.82%, 89.17%, 94.35% and 83.20% respectively, using the 10-fold cross-validation technique. When employing manual methods rather than the automated means described in this study, technical limitations and inconclusive results may hamper diagnosis. Our findings suggest that the computer-aided detection system presented herein can render diagnostic information, and thus may provide clinicians with an important tool to validate a diagnosis of celiac disease.
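One feature-extraction step, the Shannon entropy of a projected descriptor vector, can be sketched on its own; the histogram bin count and value range below are illustrative choices, not the paper's.

```python
import math
from collections import Counter

def shannon_entropy(values, bins=16, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of a 1-D feature vector after histogram
    binning: H = -sum(p * log2(p)) over occupied bins. A flat histogram
    gives high entropy; a concentrated one gives low entropy."""
    width = (hi - lo) / bins
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Scalars like this, computed per projected DAISY vector, form the feature pool that the PSO step then winnows down to the 30 used for classification.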
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  12. Acharya UR, Faust O, Ciaccio EJ, Koh JEW, Oh SL, Tan RS, et al.
    Comput Methods Programs Biomed, 2019 Jul;175:163-178.
    PMID: 31104705 DOI: 10.1016/j.cmpb.2019.04.018
    BACKGROUND AND OBJECTIVE: Complex fractionated atrial electrograms (CFAE) may contain information concerning the electrophysiological substrate of atrial fibrillation (AF); therefore they are of interest to guide catheter ablation treatment of AF. Electrogram signals are shaped by activation events, which are dynamical in nature. This makes it difficult to establish those signal properties that can provide insight into the ablation site location. Nonlinear measures may provide additional information. To test this hypothesis, we used nonlinear measures to analyze CFAE.

    METHODS: CFAE from several atrial sites, recorded for a duration of 16 s, were acquired from 10 patients with persistent and 9 patients with paroxysmal AF. These signals were appraised using non-overlapping windows of 1-, 2- and 4-s durations. The resulting data sets were analyzed with Recurrence Plots (RP) and Recurrence Quantification Analysis (RQA). The data was also quantified via entropy measures.
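A recurrence plot and its determinism (DET) measure can be sketched for a scalar series as below; no phase-space embedding is used and the recurrence threshold is illustrative, so this is the bare idea rather than the study's full RQA.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when |x[i] - x[j]| <= eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def determinism(R, lmin=2):
    """DET: fraction of recurrence points lying on diagonal lines of
    length >= lmin (main diagonal excluded). High DET means the dynamics
    revisit the same trajectories, i.e. the signal is rule-driven."""
    n = R.shape[0]
    total = on_lines = 0
    for k in range(1, n):            # upper-triangle diagonals
        diag = np.diagonal(R, k)
        total += 2 * diag.sum()      # plot is symmetric: count both halves
        run = 0
        for v in list(diag) + [0]:   # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += 2 * run
                run = 0
    return on_lines / total if total else 0.0
```

A strictly periodic series yields DET = 1; loss of determinism in short CFAE windows is the kind of contrast the study quantifies between persistent and paroxysmal AF.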

    RESULTS: RQA exhibited unique plots for persistent versus paroxysmal AF. Similar patterns were observed to be repeated throughout the RPs. Trends were consistent for signal segments of 1-, 2-, and 4-s duration. This suggests that the underlying signal generation process is also repetitive, and that repetitiveness can be detected even in 1-s sequences. The results also showed that most entropy metrics exhibited higher measurement values (closer to equilibrium) for persistent AF data. It was also found that Determinism (DET), Trapping Time (TT), and Modified Multiscale Entropy (MMSE), extracted from signals acquired at the posterior atrial free wall, are highly discriminative of persistent versus paroxysmal AF data.

    CONCLUSIONS: Short data sequences are sufficient to provide information to discern persistent versus paroxysmal AF data with a significant difference, and can be useful to detect repeating patterns of atrial activation.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  13. Safdar A, Khan MA, Shah JH, Sharif M, Saba T, Rehman A, et al.
    Microsc Res Tech, 2019 Sep;82(9):1542-1556.
    PMID: 31209970 DOI: 10.1002/jemt.23320
    Plant diseases are responsible for economic losses in agricultural countries. Manual diagnosis of plant diseases has been a key challenge over the last decade; therefore, researchers in this area have introduced automated systems. In this research work, an automated system is proposed for citrus fruit disease recognition using computer vision techniques. The proposed method incorporates five fundamental steps: preprocessing, disease segmentation, feature extraction and reduction, fusion, and classification. In the first phase, noise is removed, followed by a contrast-stretching procedure. Later, the watershed method is applied to extract the infected regions. Shape, texture, and color features are subsequently computed from these infected regions. In the fourth step, reduced features are fused using a serial-based approach, followed by a final classification step using a multiclass support vector machine. For dimensionality reduction, principal component analysis is utilized, a statistical procedure that applies an orthogonal transformation to a set of observations. Three different image data sets (Citrus Image Gallery, Plant Village, and self-collected) are combined in this research, achieving a classification accuracy of 95.5%. The results clearly show that the proposed method outperforms several existing methods in precision and accuracy.
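Two of the pipeline steps are compact enough to sketch: serial-based fusion is end-to-end concatenation of per-sample feature vectors, and PCA reduction is an orthogonal projection onto the leading singular directions. A minimal NumPy sketch (not the authors' implementation):

```python
import numpy as np

def serial_fuse(*feature_blocks):
    """Serial fusion: concatenate each sample's feature vectors end to end,
    e.g. shape + texture + color blocks into one long vector per sample."""
    return np.hstack(feature_blocks)

def pca_reduce(X, n_components):
    """Project mean-centered data onto its top principal directions.

    The right singular vectors of the centered data matrix are the
    orthogonal directions of maximal variance (the PCA transformation)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T
```

In a pipeline like this one, `pca_reduce` shrinks each feature block before `serial_fuse` joins them for the SVM.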
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  14. Al-Shabi M, Lan BL, Chan WY, Ng KH, Tan M
    Int J Comput Assist Radiol Surg, 2019 Oct;14(10):1815-1819.
    PMID: 31020576 DOI: 10.1007/s11548-019-01981-7
    PURPOSE: Lung nodules have very diverse shapes and sizes, which makes classifying them as benign/malignant a challenging problem. In this paper, we propose a novel method to predict the malignancy of nodules that have the capability to analyze the shape and size of a nodule using a global feature extractor, as well as the density and structure of the nodule using a local feature extractor.

    METHODS: We propose to use Residual Blocks with a 3 × 3 kernel size for local feature extraction and Non-Local Blocks to extract the global features. The Non-Local Block has the ability to extract global features without using a huge number of parameters. The key idea behind the Non-Local Block is to apply matrix multiplications between features on the same feature maps.
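The key matrix multiplication inside a Non-Local Block can be sketched in NumPy on a flattened feature map. The embedding weights below are plain matrices standing in for the block's 1 × 1 convolutions, and the batch/channel bookkeeping of the real layer is omitted; this is the embedded-Gaussian form, an assumption since the abstract does not name the variant.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_theta, w_phi, w_g):
    """Non-local operation on a flattened feature map x of shape (N, C),
    where N is the number of spatial positions and C the channels.

    theta @ phi.T compares every position with every other position, so
    each output is a global, attention-style mixture of all positions -
    this is how the block captures global structure with few parameters."""
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    attn = softmax(theta @ phi.T, axis=-1)  # (N, N) pairwise weights
    return x + attn @ g                     # residual connection
```

Pairing such a block with small-kernel Residual Blocks gives exactly the global-plus-local split the method describes.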

    RESULTS: We trained and validated the proposed method on the LIDC-IDRI dataset which contains 1018 computed tomography scans. We followed a rigorous procedure for experimental setup, namely tenfold cross-validation, and ignored the nodules that had been annotated by

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  15. Bilal M, Anis H, Khan N, Qureshi I, Shah J, Kadir KA
    Biomed Res Int, 2019;2019:6139785.
    PMID: 31119178 DOI: 10.1155/2019/6139785
    Background: Motion is a major source of blurring and ghosting in recovered MR images. It is more challenging in Dynamic Contrast Enhancement (DCE) MRI because motion effects and the rapid intensity changes caused by the contrast agent are difficult to distinguish from each other.

    Material and Methods: In this study, we introduce a new technique to reduce motion artifacts in DCE MRI, based on data binning and a low-rank plus sparse (L+S) reconstruction method. For data binning, radial k-space data is acquired continuously using the golden-angle radial sampling pattern and grouped into various motion states or bins. The respiratory signal for binning is extracted directly from the radially acquired k-space data. A compressed sensing- (CS-) based L+S matrix decomposition model is then used to reconstruct motion-sorted DCE MR images. Undersampled free-breathing 3D liver and abdominal DCE MR data sets are used to validate the proposed technique.
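The two preprocessing ideas, golden-angle spoke scheduling and amplitude binning of the self-navigated respiratory signal, can be sketched as follows. Bin count and signal values are illustrative, and the paper's actual binning strategy may differ:

```python
GOLDEN_ANGLE = 111.246  # degrees; successive radial spokes advance by this

def spoke_angles(n_spokes):
    """Acquisition angle of each radial spoke (degrees, modulo 180).
    Golden-angle increments keep coverage near-uniform for any subset of
    consecutive spokes, which is what makes retrospective binning work."""
    return [(i * GOLDEN_ANGLE) % 180.0 for i in range(n_spokes)]

def bin_by_respiration(resp_signal, n_bins):
    """Assign each readout to a motion state by amplitude binning of the
    respiratory signal extracted from the k-space data itself."""
    lo, hi = min(resp_signal), max(resp_signal)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((v - lo) / width), n_bins - 1) for v in resp_signal]
```

Each bin's spokes are then reconstructed jointly with the L+S model, so respiratory motion is resolved rather than blurred.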

    Results: The performance of the technique is compared with conventional L+S decomposition qualitatively along with the image sharpness and structural similarity index. Recovered images are visually sharper and have better similarity with reference images.

    Conclusion: L+S decomposition provides improved MR images with data binning as preprocessing step in free breathing scenario. Data binning resolves the respiratory motion by dividing different respiratory positions in multiple bins. It also differentiates the respiratory motion and contrast agent (CA) variations. MR images recovered for each bin are better as compared to the method without data binning.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  16. Said MA, Musarudin M, Zulkaffli NF
    Ann Nucl Med, 2020 Dec;34(12):884-891.
    PMID: 33141408 DOI: 10.1007/s12149-020-01543-x
    OBJECTIVE: 18F is the most extensively used radioisotope in current clinical PET imaging practice. This choice is based on several criteria: it is a pure PET radioisotope with an optimal half-life and a low positron energy, which contributes to a smaller positron range. In addition to 18F, other radioisotopes such as 68Ga and 124I have recently gained much attention with the increasing interest in new PET tracers entering clinical trials. This study aims to determine the minimal scan time per bed position (Tmin) for 124I and 68Ga based on the quantitative differences in PET imaging of 68Ga and 124I relative to 18F.

    METHODS: The European Association of Nuclear Medicine (EANM) procedure guidelines version 2.0 for FDG-PET tumor imaging were adhered to for this purpose. A NEMA2012/IEC2008 phantom was filled with a tumor-to-background ratio of 10:1, at activity concentrations of 30 kBq/ml ± 10% and 3 kBq/ml ± 10%, for each radioisotope. The phantom was scanned using different acquisition times per bed position (1, 5, 7, 10 and 15 min) to determine Tmin. Tmin was defined using an image coefficient of variation (COV) of 15%.
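The COV-based definition of Tmin can be sketched directly: measure the COV at each acquisition time and find the shortest time at which it falls to the 15% limit. The linear interpolation between measured points is an assumption for illustration; the paper does not state how Tmin was computed between measurements.

```python
def cov_percent(roi_values):
    """Image coefficient of variation: 100 * SD / mean over a background ROI."""
    n = len(roi_values)
    mean = sum(roi_values) / n
    var = sum((v - mean) ** 2 for v in roi_values) / n
    return 100.0 * var ** 0.5 / mean

def minimal_scan_time(times, covs, limit=15.0):
    """Shortest acquisition time whose COV is within `limit`, assuming COV
    decreases with time; interpolates linearly between the two bracketing
    measurements."""
    for (t0, c0), (t1, c1) in zip(zip(times, covs), zip(times[1:], covs[1:])):
        if c0 > limit >= c1:
            return t0 + (c0 - limit) * (t1 - t0) / (c0 - c1)
    return times[0] if covs and covs[0] <= limit else None
```

Applying this per radioisotope is what yields the very different Tmin values (minutes for 18F/68Ga, tens of minutes for 124I).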

    RESULTS: Tmin obtained for 18F, 68Ga and 124I were 3.08, 3.24 and 32.93 min, respectively. Quantitative analyses among 18F, 68Ga and 124I images were performed. Signal-to-noise ratio (SNR), contrast recovery coefficients (CRC), and visibility (VH) are the image quality parameters analysed in this study. Generally, 68Ga and 18F gave better image quality as compared to 124I for all the parameters studied.

    CONCLUSION: We have defined Tmin for 18F, 68Ga and 124I PET/CT imaging based on NEMA2012/IEC2008 phantom imaging. Despite the long scanning time suggested by Tmin, image quality improves, especially for 124I. In clinical practice, however, a long acquisition time may cause patient discomfort and motion artifacts.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  17. Rajagopal H, Mokhtar N, Tengku Mohmed Noor Izam TF, Wan Ahmad WK
    PLoS One, 2020;15(5):e0233320.
    PMID: 32428043 DOI: 10.1371/journal.pone.0233320
    Image Quality Assessment (IQA) is essential for the accuracy of systems for automatic recognition of tree species from wood samples. In this study, a No-Reference IQA (NR-IQA) metric for wood images, wood NR-IQA (WNR-IQA), was proposed. Support Vector Regression (SVR) was trained using Generalized Gaussian Distribution (GGD) and Asymmetric Generalized Gaussian Distribution (AGGD) features measured on wood images, while the Mean Opinion Score (MOS) was obtained from subjective evaluation. This was followed by a comparison between the proposed WNR-IQA metric, three established NR-IQA metrics, namely the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), deepIQA, and Deep Bilinear Convolutional Neural Networks (DB-CNN), and five Full-Reference IQA (FR-IQA) metrics known as MSSIM, SSIM, FSIM, IWSSIM, and GMSD. The WNR-IQA metric, BRISQUE, deepIQA, DB-CNN, and the FR-IQAs were then compared against MOS values to evaluate the performance of the automatic IQA metrics. As a result, the WNR-IQA metric exhibited higher performance than the BRISQUE, deepIQA, DB-CNN, and FR-IQA metrics. The highest-quality images may not be routinely available due to logistical factors, such as dust, poor illumination, and the hot environment present in the timber industry. Moreover, motion blur can occur due to relative motion between the camera and the wood slice. Therefore, the advantage of WNR-IQA lies in its independence from a "perfect" reference image for image quality evaluation.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  18. Ninomiya K, Arimura H, Chan WY, Tanaka K, Mizuno S, Muhammad Gowdh NF, et al.
    PLoS One, 2021;16(1):e0244354.
    PMID: 33428651 DOI: 10.1371/journal.pone.0244354
    OBJECTIVES: To propose a novel robust radiogenomics approach to the identification of epidermal growth factor receptor (EGFR) mutations among patients with non-small cell lung cancer (NSCLC) using Betti numbers (BNs).

    MATERIALS AND METHODS: Contrast-enhanced computed tomography (CT) images of 194 multi-racial NSCLC patients (79 EGFR mutants and 115 wildtypes) were collected from three different countries using five manufacturers' scanners with a variety of scanning parameters. Ninety-nine cases obtained from the University of Malaya Medical Centre (UMMC) in Malaysia were used for training and validation procedures. Forty-one cases collected from the Kyushu University Hospital (KUH) in Japan and fifty-four cases obtained from The Cancer Imaging Archive (TCIA) in the United States were used for a test procedure. Radiomic features were obtained from BN maps, which represent topologically invariant heterogeneous characteristics of lung cancer on CT images, by applying histogram- and texture-based feature computations. A BN-based signature was determined using support vector machine (SVM) models with the best combination of features that maximized a robustness index (RI), defined to favor a higher total area under the receiver operating characteristic curves (AUCs) and a smaller difference between the training and validation AUCs. The SVM model was built using the signature and optimized in a five-fold cross validation. The BN-based model was compared to conventional original image (OI)- and wavelet-decomposition (WD)-based models with respect to the RI between the validation and the test.
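Betti numbers on a 2D binary image count connected components (b0) and holes (b1), and are topologically invariant as the abstract notes. A minimal sketch of that idea on a binarized slice, using flood fill and the standard foreground-4 / background-8 connectivity duality (an illustration only, not the authors' BN-map computation):

```python
from collections import deque

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def _components(grid, value, neigh):
    # count connected components of cells equal to `value` via BFS flood fill
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if grid[i][j] == value and not seen[i][j]:
                count += 1
                seen[i][j] = True
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in neigh:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == value and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

def betti_2d(grid):
    # pad with background so the exterior forms a single background component
    w = len(grid[0])
    padded = [[0] * (w + 2)] + [[0] + row + [0] for row in grid] + [[0] * (w + 2)]
    b0 = _components(padded, 1, N4)       # connected foreground regions
    b1 = _components(padded, 0, N8) - 1   # holes = background components minus the exterior
    return b0, b1

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(betti_2d(ring))  # (1, 1): one component enclosing one hole
```

A BN map, as described in the paper, would repeat such counts over local windows at multiple binarization thresholds; histogram- and texture-based features are then computed from those maps.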

    RESULTS: The BN-based model showed a higher RI of 1.51 compared with the models based on the OI (RI: 1.33) and the WD (RI: 1.29).

    CONCLUSION: The proposed model showed higher robustness than the conventional models in the identification of EGFR mutations among NSCLC patients. The results suggested the robustness of the BN-based approach against variations in image scanner/scanning parameters.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  19. Chen Z, Rajamanickam L, Cao J, Zhao A, Hu X
    PLoS One, 2021;16(12):e0260758.
    PMID: 34879097 DOI: 10.1371/journal.pone.0260758
    This study aims to solve the overfitting problem caused by insufficient labeled images in the automatic image annotation field. We propose a transfer learning model called CNN-2L that incorporates the label localization strategy described in this study. The model consists of an InceptionV3 network pretrained on the ImageNet dataset and a label localization algorithm. First, the pretrained InceptionV3 network extracts features from the target dataset that are used to train a specific classifier and fine-tune the entire network to obtain an optimal model. Then, the obtained model is used to derive the probabilities of the predicted labels. For this purpose, we introduce a squeeze and excitation (SE) module into the network architecture that augments useful feature information, inhibits useless feature information, and conducts feature reweighting. Next, we perform label localization to obtain the label probabilities and determine the final label set for each image. During this process, the number of labels must be determined. The optimal K value is obtained experimentally and used to determine the number of predicted labels, thereby solving the empty-label-set problem that occurs when the predicted label values of images are below a fixed threshold. Experiments on the Corel5k multilabel image dataset verify that CNN-2L improves the labeling precision by 18% and 15% compared with the traditional multiple-Bernoulli relevance model (MBRM) and joint equal contribution (JEC) algorithms, respectively, and it improves the recall by 6% compared with JEC. Additionally, it improves the precision by 20% and 11% compared with the deep learning methods Weight-KNN and adaptive hypergraph learning (AHL), respectively. Although CNN-2L fails to improve the recall compared with the semantic extension model (SEM), it improves the F1 score by 1%. The experimental results reveal that the proposed transfer learning model based on a label localization strategy is effective for automatic image annotation and substantially boosts multilabel image annotation performance.
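The empty-label-set problem the abstract mentions arises because a fixed probability threshold can reject every label for a weakly confident image; selecting the top-K labels instead always yields K labels. A minimal sketch of the contrast (illustrative only; the label names and probabilities are made up, and this is not the authors' localization algorithm):

```python
def threshold_labels(probs, t):
    # fixed-threshold baseline: may return an empty label set
    return [label for label, p in probs.items() if p >= t]

def topk_labels(probs, k):
    # top-K selection: always returns k labels, avoiding the empty set
    return [label for label, _ in sorted(probs.items(), key=lambda kv: -kv[1])[:k]]

# hypothetical per-image label probabilities from a multilabel classifier
probs = {"sky": 0.41, "sea": 0.38, "boat": 0.12, "tree": 0.06, "car": 0.03}

print(threshold_labels(probs, 0.5))  # [] -- the empty-label-set problem
print(topk_labels(probs, 3))         # ['sky', 'sea', 'boat']
```

In CNN-2L the value of K is tuned experimentally on the target dataset rather than fixed a priori.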
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  20. Aly CA, Abas FS, Ann GH
    Sci Prog, 2021;104(2):368504211005480.
    PMID: 33913378 DOI: 10.1177/00368504211005480
    INTRODUCTION: Action recognition is a challenging time series classification task that has received much attention in the recent past due to its importance in critical applications, such as surveillance, visual behavior study, topic discovery, security, and content retrieval.

    OBJECTIVES: The main objective of the research is to develop robust, high-performance human action recognition techniques. To reach this objective, local and holistic feature extraction methods are combined, guided by an analysis of which features are most effective to extract, followed by simple, high-performance machine learning algorithms.

    METHODS: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame rate re-sampling, and compact feature vector extraction. This process is achieved by emphasizing variations and extracting strong patterns in feature vectors before classification.
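The first stage of the scheme above, shot boundary detection, is commonly implemented by comparing intensity histograms of consecutive frames and flagging a cut when the distance exceeds a threshold. A minimal sketch under that assumption (frames are flattened grayscale pixel lists; this is an illustration, not the authors' method):

```python
def histogram(frame, bins=8):
    # normalized intensity histogram of a flattened 8-bit grayscale frame
    h = [0] * bins
    for px in frame:
        h[min(px * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in h]

def shot_boundaries(frames, threshold=0.5):
    # flag frame i as a cut when the L1 distance between consecutive
    # frame histograms exceeds the threshold
    cuts = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# two synthetic "shots": dark frames followed by bright frames
dark = [20] * 64
bright = [230] * 64
frames = [dark, dark, dark, bright, bright]
print(shot_boundaries(frames))  # [3]: the cut at the dark-to-bright transition
```

After segmentation, each shot would be re-sampled to a common frame rate and reduced to a compact feature vector before classification, as the scheme describes.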

    RESULTS: The proposed schemes were tested on datasets with cluttered backgrounds, low- or high-resolution videos, different viewpoints, and different camera motion conditions, namely, the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. They produced highly accurate video analysis compared with other works on these four widely used datasets. The First, Second, and Third Schemes provide recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann, respectively.

    CONCLUSION: Each of the proposed schemes provides high recognition accuracy compared with other state-of-the-art methods; the Second Scheme in particular gives results comparable to benchmarked approaches.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods