Displaying publications 1 - 20 of 163 in total

  1. Too CW, Fong KY, Hang G, Sato T, Nyam CQ, Leong SH, et al.
    J Vasc Interv Radiol, 2024 May;35(5):780-789.e1.
    PMID: 38355040 DOI: 10.1016/j.jvir.2024.02.006
    PURPOSE: To validate the sensitivity and specificity of a 3-dimensional (3D) convolutional neural network (CNN) artificial intelligence (AI) software for lung lesion detection and to establish concordance between AI-generated needle paths and those used in actual biopsy procedures.

    MATERIALS AND METHODS: This was a retrospective study using computed tomography (CT) scans from 3 hospitals. Inclusion criteria were scans with 1-5 nodules of diameter ≥5 mm; exclusion criteria were poor-quality scans or those with nodules measuring <5 mm in diameter. In the lesion detection phase, 2,147 nodules from 219 scans were used to develop and train the deep learning 3D-CNN to detect lesions. The 3D-CNN was validated with 235 scans (354 lesions) for sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) analysis. In the path planning phase, Bayesian optimization was used to propose possible needle trajectories for lesion biopsy while avoiding vital structures. Software-proposed needle trajectories were compared with actual biopsy path trajectories from intraprocedural CT scans in 150 patients, with a match defined as an angular deviation of <5° between the 2 trajectories.

    RESULTS: The model achieved an overall AUC of 97.4% (95% CI, 96.3%-98.2%) for lesion detection, with mean sensitivity of 93.5% and mean specificity of 93.2%. Among the software-proposed needle trajectories, 85.3% were feasible, with 82% matching actual paths and similar performance between supine and prone/oblique patient orientations (P = .311). The mean angular deviation between matching trajectories was 2.30° (SD ± 1.22); the mean path deviation was 2.94 mm (SD ± 1.60).

    CONCLUSIONS: Segmentation, lesion detection, and path planning for CT-guided lung biopsy using an AI-guided software showed promising results. Future integration with automated robotic systems may pave the way toward fully automated biopsy procedures.

    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted
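The trajectory-match criterion used above (angular deviation of <5° between the proposed and actual needle paths) is plain vector geometry. A minimal sketch; the function names and example vectors are illustrative, not from the software described:

```python
import numpy as np

def angular_deviation_deg(v1, v2):
    """Angle in degrees between two 3-D trajectory direction vectors."""
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def is_match(proposed, actual, threshold_deg=5.0):
    """Match criterion from the study: angular deviation under 5 degrees."""
    return angular_deviation_deg(proposed, actual) < threshold_deg

deviation = angular_deviation_deg([0, 0, 1], [0.05, 0, 1])  # about 2.9 degrees
```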
  2. Tan SL, Selvachandran G, Ding W, Paramesran R, Kotecha K
    Interdiscip Sci, 2024 Mar;16(1):16-38.
    PMID: 37962777 DOI: 10.1007/s12539-023-00589-5
    As one of the most common female cancers, cervical cancer often develops years after a prolonged and reversible pre-cancerous stage. Traditional classification algorithms used for the detection of cervical cancer often require cell segmentation and feature extraction techniques, while convolutional neural network (CNN) models demand a large dataset to mitigate over-fitting and poor generalization problems. To this end, this study aims to develop deep learning models for automated cervical cancer detection that do not rely on segmentation methods or custom features. Due to limited data availability, transfer learning was employed with pre-trained CNN models to operate directly on Pap smear images for a seven-class classification task. A thorough evaluation and comparison of 13 pre-trained deep CNN models were performed using the publicly available Herlev dataset and the Keras package in Google Colaboratory. In terms of accuracy and performance, DenseNet-201 is the best-performing model. The pre-trained CNN models studied in this paper produced good experimental results and required little computing time.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
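The transfer-learning idea described above (a frozen pre-trained backbone, with only a lightweight classifier trained on its features) can be sketched in NumPy on synthetic data. The random-projection "backbone" and nearest-centroid head below are toy stand-ins for DenseNet-201 and the Keras training loop, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: a fixed random projection + ReLU.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    # Backbone weights are never updated -- it only produces features.
    return np.maximum(x @ W_frozen, 0.0)

def fit_head(feats, labels, n_classes=7):
    # "Train" only a lightweight head: one centroid per class.
    return np.stack([feats[labels == k].mean(axis=0) for k in range(n_classes)])

def predict(feats, centroids):
    d = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Synthetic seven-class "images" (flattened), one Gaussian cluster per class.
means = rng.normal(scale=2.0, size=(7, 64))
X = np.vstack([m + rng.normal(size=(100, 64)) for m in means])
y = np.repeat(np.arange(7), 100)

feats = extract_features(X)
head = fit_head(feats, y)
accuracy = (predict(feats, head) == y).mean()
```

The point of the pattern is that only `fit_head` touches labels; in the paper the same role is played by fine-tuning the classification layers of a pre-trained CNN.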
  3. Liu F, Wang H, Liang SN, Jin Z, Wei S, Li X, et al.
    Comput Biol Med, 2023 May;157:106790.
    PMID: 36958239 DOI: 10.1016/j.compbiomed.2023.106790
    Structural magnetic resonance imaging (sMRI) is a popular technique that is widely applied in Alzheimer's disease (AD) diagnosis. However, only a few structural atrophy areas in sMRI scans are highly associated with AD. The degree of atrophy in patients' brain tissues and the distribution of lesion areas differ among patients. Therefore, a key challenge in sMRI-based AD diagnosis is identifying discriminating atrophy features. Hence, we propose a multiplane and multiscale feature-level fusion attention (MPS-FFA) model. The model has three components, (1) A feature encoder uses a multiscale feature extractor with hybrid attention layers to simultaneously capture and fuse multiple pathological features in the sagittal, coronal, and axial planes. (2) A global attention classifier combines clinical scores and two global attention layers to evaluate the feature impact scores and balance the relative contributions of different feature blocks. (3) A feature similarity discriminator minimizes the feature similarities among heterogeneous labels to enhance the ability of the network to discriminate atrophy features. The MPS-FFA model provides improved interpretability for identifying discriminating features using feature visualization. The experimental results on the baseline sMRI scans from two databases confirm the effectiveness (e.g., accuracy and generalizability) of our method in locating pathological locations. The source code is available at https://github.com/LiuFei-AHU/MPSFFA.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
  4. Kaplan E, Baygin M, Barua PD, Dogan S, Tuncer T, Altunisik E, et al.
    Med Eng Phys, 2023 May;115:103971.
    PMID: 37120169 DOI: 10.1016/j.medengphy.2023.103971
    PURPOSE: The classification of medical images is an important priority for clinical research and helps to improve the diagnosis of various disorders. This work aims to classify the neuroradiological features of patients with Alzheimer's disease (AD) using an automatic hand-modeled method with high accuracy.

    MATERIALS AND METHOD: This work uses two (private and public) datasets. The private dataset consists of 3807 magnetic resonance imaging (MRI) and computed tomography (CT) images belonging to two (normal and AD) classes. The second, public (Kaggle AD) dataset contains 6400 MR images. The presented classification model comprises three fundamental phases: feature extraction using an exemplar hybrid feature extractor, neighborhood component analysis-based feature selection, and classification utilizing eight different classifiers. The novelty of this model lies in its feature extraction. This phase is inspired by vision transformers, and hence 16 exemplars are generated. Histogram of oriented gradients (HOG), local binary pattern (LBP), and local phase quantization (LPQ) feature extraction functions are applied to each exemplar/patch and to the raw brain image. Finally, the created features are merged, and the best features are selected using neighborhood component analysis (NCA). These features are fed to eight classifiers to obtain the highest classification performance. The presented image classification model uses exemplar histogram-based features; hence, it is called ExHiF.

    RESULTS: We have developed the ExHiF model with a ten-fold cross-validation strategy using two (private and public) datasets with shallow classifiers. We have obtained 100% classification accuracy using cubic support vector machine (CSVM) and fine k nearest neighbor (FkNN) classifiers for both datasets.

    CONCLUSIONS: Our developed model is ready to be validated with more datasets and has the potential to be employed in mental hospitals to assist neurologists in confirming their manual screening of AD using MRI/CT images.

    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
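The exemplar scheme described above (split the image into 16 patches, compute histogram descriptors per patch and for the whole image, then concatenate) can be sketched as follows. Plain intensity histograms stand in for HOG/LBP/LPQ here, and the function name is illustrative, not from the paper:

```python
import numpy as np

def exemplar_histogram_features(image, grid=4, bins=8):
    """Split an image into grid*grid exemplars (16 patches for grid=4),
    compute a normalized intensity histogram per patch plus one for the
    whole image, and concatenate them into a single feature vector."""
    h, w = image.shape
    ph, pw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            feats.append(hist / hist.sum())
    whole, _ = np.histogram(image, bins=bins, range=(0, 256))
    feats.append(whole / whole.sum())
    return np.concatenate(feats)   # (grid*grid + 1) * bins values

img = (np.arange(64 * 64) % 256).reshape(64, 64)   # synthetic 8-bit image
fv = exemplar_histogram_features(img)              # 17 * 8 = 136 features
```

In ExHiF the concatenated descriptors would then pass through NCA selection before classification.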
  5. Adibah Yusof NA, Abdul Karim MK, Asikin NM, Paiman S, Awang Kechik MM, Abdul Rahman MA, et al.
    Curr Med Imaging, 2023;19(10):1105-1113.
    PMID: 35975862 DOI: 10.2174/1573405618666220816160544
    BACKGROUND: For almost three decades, computed tomography (CT) has been extensively used in medical diagnosis, which has led researchers to investigate the link between CT dose exposure and image quality.

    METHODS: In this study, a systematic review and meta-analysis were conducted on CT phantom resolution studies, particularly those based on low contrast detectability (LCD). Furthermore, the associations between CT parameters such as tube voltage and the type of reconstruction algorithm, the extent of phantom scanning, image quality, and exposure dose were also investigated. We searched the PubMed, ScienceDirect, Google Scholar, and Scopus databases for related articles published from 2011 to 2020. The notable keywords comprised "computed tomography", "CT phantom", and "low contrast detectability". Of 52 articles, 20 met the inclusion criteria for this systematic review.

    RESULTS: Dichotomous outcomes were chosen, with results expressed as risk ratios in the meta-analysis. Notably, the noise in iterative reconstruction (IR) was reduced by 24%, 33%, and 36% with the use of smooth, medium, and sharp filters, respectively. Furthermore, adaptive iterative dose reduction (AIDR 3D) improved image quality and the visibility of smaller, less dense objects compared to filtered back-projection. Most of the researchers used a 120 kVp tube voltage to scan phantoms for quality assurance studies.

    CONCLUSION: Hence, optimizing primary factors such as tube potential reduces the dose exposure significantly, and the optimized IR technique could substantially reduce the radiation dose while maintaining the image quality.

    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted/methods
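The risk ratio used to pool the dichotomous outcomes above is a simple ratio of event proportions; the counts in this sketch are illustrative, not taken from the review:

```python
def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio for a dichotomous outcome: the probability of the
    outcome in group A divided by the probability in group B."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical example: 10/100 events vs 20/100 events.
rr = risk_ratio(10, 100, 20, 100)   # 0.10 / 0.20 = 0.5
```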
  6. Dey A, Chattopadhyay S, Singh PK, Ahmadian A, Ferrara M, Senu N, et al.
    Sci Rep, 2021 Dec 15;11(1):24065.
    PMID: 34911977 DOI: 10.1038/s41598-021-02731-z
    COVID-19 is a respiratory disease that causes infection in both lungs and the upper respiratory tract. The World Health Organization (WHO) has declared it a global pandemic because of its rapid spread across the globe. The most common way for COVID-19 diagnosis is real-time reverse transcription-polymerase chain reaction (RT-PCR), which takes a significant amount of time to produce a result. Computer-based medical image analysis is more beneficial for the diagnosis of such disease, as it can give better results in less time. Computed Tomography (CT) scans are used to monitor lung diseases including COVID-19. In this work, a hybrid model for COVID-19 detection has been developed, which has two key stages. In the first stage, we have fine-tuned the parameters of pre-trained convolutional neural networks (CNNs) to extract some features from the COVID-19 affected lungs. As pre-trained CNNs, we have used two standard CNNs, namely GoogleNet and ResNet18. Then, we have proposed a hybrid meta-heuristic feature selection (FS) algorithm, named Manta Ray Foraging based Golden Ratio Optimizer (MRFGRO), to select the most significant feature subset. The proposed model is implemented over three publicly available datasets, namely the COVID-CT dataset, SARS-COV-2 dataset, and MOSMED dataset, and attains state-of-the-art classification accuracies of 99.15%, 99.42%, and 95.57%, respectively. The obtained results confirm that the proposed approach is quite efficient when compared to the local texture descriptors used for COVID-19 detection from chest CT-scan images.
    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted/methods*
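The wrapper-style feature selection described above (a meta-heuristic searches feature subsets and keeps the one a classifier scores best) can be sketched with a plain random search in place of the MRFGRO algorithm, whose update rules are not given in this listing. The nearest-centroid fitness function and the synthetic data are illustrative assumptions:

```python
import numpy as np

def centroid_accuracy(X, y):
    """Training accuracy of a nearest-class-centroid classifier,
    used here as a cheap subset-fitness function."""
    classes = np.unique(y)
    cents = np.stack([X[y == c].mean(axis=0) for c in classes])
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return (classes[d.argmin(axis=1)] == y).mean()

def random_search_fs(X, y, k, n_iter=200, seed=0):
    """Evaluate random k-feature subsets and keep the best -- a simplified
    stand-in for the paper's meta-heuristic search loop."""
    rng = np.random.default_rng(seed)
    best, best_fit = None, -np.inf
    for _ in range(n_iter):
        subset = np.sort(rng.choice(X.shape[1], size=k, replace=False))
        fit = centroid_accuracy(X[:, subset], y)
        if fit > best_fit:
            best, best_fit = subset, fit
    return best, best_fit

# Two informative features (0 and 1) among ten; the rest are pure noise.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 10))
X[:, :2] += 3.0 * y[:, None]
subset, fitness = random_search_fs(X, y, k=2)
```

A real meta-heuristic replaces the uniform random draws with guided moves, but the evaluate-subset/keep-best skeleton is the same.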
  7. Ong SQ, Ahmad H, Nair G, Isawasan P, Majid AHA
    Sci Rep, 2021 05 10;11(1):9908.
    PMID: 33972645 DOI: 10.1038/s41598-021-89365-3
    Classification of Aedes aegypti (Linnaeus) and Aedes albopictus (Skuse) by humans remains challenging. We proposed a highly accessible method to develop a deep learning (DL) model and implement the model for mosquito image classification using hardware that could regulate the development process. In particular, we constructed a dataset with 4120 images of Aedes mosquitoes that were older than 12 days, by which age their common distinguishing morphological features had disappeared, and we illustrated how to set up supervised deep convolutional neural networks (DCNNs) with hyperparameter adjustment. The model was first deployed externally in real time on three different generations of mosquitoes, and its accuracy was compared with human expert performance. Our results showed that both the learning rate and the number of epochs significantly affected the accuracy, and the best-performing hyperparameters achieved an accuracy of more than 98% at classifying mosquitoes, which showed no significant difference from human-level performance. We demonstrated the feasibility of the method to construct a model with the DCNN when deployed externally on mosquitoes in real time.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
  8. Pathan F, Zainal Abidin HA, Vo QH, Zhou H, D'Angelo T, Elen E, et al.
    Eur Heart J Cardiovasc Imaging, 2021 01 01;22(1):102-110.
    PMID: 31848575 DOI: 10.1093/ehjci/jez303
    AIMS: Left atrial (LA) strain is a prognostic biomarker with utility across a spectrum of acute and chronic cardiovascular pathologies. There are limited data on intervendor differences and no data on intermodality differences for LA strain. We sought to compare the intervendor and intermodality differences between transthoracic echocardiography (TTE) and cardiac magnetic resonance (CMR) derived LA strain. We hypothesized that various components of atrial strain would show good intervendor and intermodality correlation but that there would be systematic differences between vendors and modalities.

    METHODS AND RESULTS: We evaluated 54 subjects (43 patients with a clinical indication for CMR and 11 healthy volunteers) in a study comparing TTE- and CMR-derived LA reservoir strain (ƐR), conduit strain (ƐCD), and contractile strain (ƐCT). The LA strain components were evaluated using four dedicated types of post-processing software. We evaluated the correlation and systematic bias between modalities and within each modality. Intervendor and intermodality correlations were as follows: ƐR, intraclass correlation coefficient (ICC) 0.64-0.90; ƐCD, ICC 0.62-0.89; and ƐCT, ICC 0.58-0.77. There was evidence of systematic bias between vendors and modalities, with mean differences ranging from 3.1% to 12.2% for ƐR, 1.6% to 8.6% for ƐCD, and 0.3% to 3.6% for ƐCT. Reproducibility analysis revealed an intraobserver coefficient of variance (COV) of 6.5-14.6% and an interobserver COV of 9.9-18.7%.

    CONCLUSION: Vendor-derived ƐR, ƐCD, and ƐCT demonstrate modest to excellent intervendor and intermodality correlation depending on the strain component examined. There are systematic differences in measurements depending on modality and vendor. These differences may be addressed by future studies that examine calibration of LA geometry/higher frame rate imaging, semi-quantitative approaches, and improvements in reproducibility.

    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
  9. Silva H, Chellappan K, Karunaweera N
    Comput Math Methods Med, 2021;2021:4208254.
    PMID: 34873414 DOI: 10.1155/2021/4208254
    Skin lesions are a feature of many diseases, including cutaneous leishmaniasis (CL). Ulcerative lesions are a common manifestation of CL. Response to treatment in such lesions is judged through assessment of the healing process by regular clinical observation, which remains a challenge for the clinician, the health system, and the patient in leishmaniasis-endemic countries. In this study, image processing was initially performed on 40 CL lesion color images captured with a mobile phone camera, to establish a technique for extracting features from the image that could be related to the clinical status of the lesion. The identified techniques were further developed, and ten ulcer images were analyzed to detect the extent of the inflammatory response and/or signs of healing using pattern recognition of inflammatory tissue captured in the image. The images were preprocessed at the outset, and their quality was improved using the CIE L∗a∗b color space technique. Furthermore, features were extracted using principal component analysis and profiled using the signal spectrogram technique. This study established an adaptive thresholding technique, ranging between 35 and 200, to profile the skin lesion images using signal spectrograms plotted with Signal Analyzer in MATLAB. The outcome indicates its potential utility in visualizing and assessing the inflammatory tissue response in a CL ulcer. This approach is expected to be developed further into an mHealth-based prediction algorithm to enable remote monitoring of treatment response in cutaneous leishmaniasis.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
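Once an image channel is chosen, the 35-200 thresholding band reported above amounts to a simple intensity mask. A minimal sketch; the function name and sample values are illustrative:

```python
import numpy as np

def threshold_mask(gray, lo=35, hi=200):
    """Band thresholding of an 8-bit grayscale lesion image: keep pixels
    whose intensity falls inside [lo, hi] (here the 35-200 band the study
    reports) and discard everything else."""
    return (gray >= lo) & (gray <= hi)

img = np.array([[10, 50], [150, 230]], dtype=np.uint8)
mask = threshold_mask(img)   # only the 50 and 150 pixels survive
```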
  10. Ranjbarzadeh R, Jafarzadeh Ghoushchi S, Bendechache M, Amirabadi A, Ab Rahman MN, Baseri Saadi S, et al.
    Biomed Res Int, 2021;2021:5544742.
    PMID: 33954175 DOI: 10.1155/2021/5544742
    The COVID-19 pandemic is a global, national, and local public health concern which has caused a significant outbreak in all countries and regions for both males and females around the world. Automated detection of lung infections and their boundaries from medical images offers great potential to augment patient treatment healthcare strategies for tackling COVID-19 and its impacts. Detecting this disease from lung CT scan images is perhaps one of the fastest ways to diagnose patients. However, finding the presence of infected tissues and segmenting them from CT slices faces numerous challenges, including similar adjacent tissues, vague boundaries, and erratic infections. To eliminate these obstacles, we propose a two-route convolutional neural network (CNN) that extracts global and local features for detecting and classifying COVID-19 infection from CT images. Each pixel of the image is classified into normal or infected tissue. To improve the classification accuracy, we used two different strategies, fuzzy c-means clustering and local directional pattern (LDN) encoding, to represent the input image differently. This allows us to find more complex patterns in the image. To overcome the overfitting problems due to small samples, an augmentation approach is utilized. The results demonstrated that the proposed framework achieved a precision of 96%, recall of 97%, F score, average surface distance (ASD) of 2.8 ± 0.3 mm, and volume overlap error (VOE) of 5.6 ± 1.2%.
    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted
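The fuzzy c-means step mentioned above assigns each pixel a soft membership to every cluster rather than a hard label. A minimal NumPy sketch of the standard algorithm on synthetic 1-D intensities (the data and initialization scheme are illustrative, not from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100):
    """Basic fuzzy c-means: alternate between soft membership updates
    (u ~ 1/d^(2/(m-1))) and fuzzy-weighted centre updates."""
    # Initialise centres at evenly spaced quantiles of the data.
    qs = (np.arange(c) + 0.5) / c
    centers = np.quantile(X, qs, axis=0)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers

# Two well-separated intensity clusters ("normal" vs "infected" pixels).
rng = np.random.default_rng(42)
X = np.concatenate([0.1 + 0.01 * rng.standard_normal(50),
                    0.9 + 0.01 * rng.standard_normal(50)])[:, None]
u, centers = fuzzy_c_means(X, c=2)
```

In the paper this soft representation, together with LDN encoding, gives the CNN alternative views of the same input.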
  11. Abdullah KA, McEntee MF, Reed WM, Kench PL
    J Appl Clin Med Phys, 2020 Sep;21(9):209-214.
    PMID: 32657493 DOI: 10.1002/acm2.12977
    PURPOSE: The purpose of this study was to investigate the effect of increasing iterative reconstruction (IR) algorithm strength at different tube voltages in coronary computed tomography angiography (CCTA) protocols using a three-dimensional (3D)-printed and Catphan® 500 phantoms.

    METHODS: A 3D-printed cardiac insert and Catphan 500 phantoms were scanned using CCTA protocols at 120 and 100 kVp tube voltages. All CT acquisitions were reconstructed using filtered back projection (FBP) and Adaptive Statistical Iterative Reconstruction (ASIR) algorithm at 40% and 60% strengths. Image quality characteristics such as image noise, signal-noise ratio (SNR), contrast-noise ratio (CNR), high spatial resolution, and low contrast resolution were analyzed.

    RESULTS: There was no significant difference (P > 0.05) between 120 and 100 kVp measures for image noise for FBP vs ASIR 60% (16.6 ± 3.8 vs 16.7 ± 4.8), SNR of ASIR 40% vs ASIR 60% (27.3 ± 5.4 vs 26.4 ± 4.8), and CNR of FBP vs ASIR 40% (31.3 ± 3.9 vs 30.1 ± 4.3), respectively. Based on the Modulation Transfer Function (MTF) analysis, image quality changed minimally with tube voltage but improved when higher strengths of ASIR were used. The best measure of low contrast detectability was observed at ASIR 60% at 120 kVp.

    CONCLUSIONS: Changing the IR strength yielded different image noise characteristics. In this study, the use of 100 kVp with ASIR 60% yielded image noise characteristics comparable to the standard CCTA protocol of 120 kVp with ASIR 40%. A combination of 3D-printed and Catphan® 500 phantoms could be used to perform CT dose optimization protocols.

    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted
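The noise, SNR, and CNR metrics compared above are computed from region-of-interest statistics. A minimal sketch with synthetic ROI samples (the values are illustrative, not from the phantom study):

```python
import numpy as np

def roi_metrics(roi_signal, roi_background):
    """Image-quality metrics from two regions of interest:
    noise = SD of the background ROI, SNR = signal mean / noise,
    CNR = |difference of ROI means| / noise."""
    noise = roi_background.std()
    snr = roi_signal.mean() / noise
    cnr = abs(roi_signal.mean() - roi_background.mean()) / noise
    return noise, snr, cnr

rng = np.random.default_rng(0)
signal = rng.normal(100.0, 5.0, size=1000)      # e.g. contrast-filled ROI
background = rng.normal(0.0, 5.0, size=1000)    # e.g. uniform phantom ROI
noise, snr, cnr = roi_metrics(signal, background)
```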
  12. Ramli N, Lim CH, Rajagopal R, Tan LK, Seow P, Ariffin H
    Pediatr Radiol, 2020 08;50(9):1277-1283.
    PMID: 32591982 DOI: 10.1007/s00247-020-04717-x
    BACKGROUND: Intrathecal and intravenous chemotherapy, specifically methotrexate, might contribute to neural microstructural damage.

    OBJECTIVE: To assess, by diffusion tensor imaging, microstructural integrity of white matter in paediatric patients with acute lymphoblastic leukaemia (ALL) following intrathecal and intravenous chemotherapy.

    MATERIALS AND METHODS: Eleven children diagnosed with de novo ALL underwent MRI scans of the brain with diffusion tensor imaging (DTI) prior to commencement of chemotherapy and at 12 months after diagnosis, using a 3-tesla (T) MRI scanner. We investigated the changes in DTI parameters in white matter tracts before and after chemotherapy using tract-based spatial statistics overlaid on the International Consortium of Brain Mapping DTI-81 atlas. All of the children underwent formal neurodevelopmental assessment at the two study time points.

    RESULTS: Whole-brain DTI analysis showed significant changes between the two time points, affecting several white matter tracts. The tracts demonstrated longitudinal changes of decreasing mean and radial diffusivity. The children's neurodevelopment was nearly age-appropriate at the end of ALL treatment.

    CONCLUSION: The quantification of white matter tract changes in children undergoing chemotherapy showed improving longitudinal values in DTI metrics (stable fractional anisotropy, decreasing mean and radial diffusivity), which are not consistent with deterioration of microstructural integrity in these children.

    Matched MeSH terms: Image Interpretation, Computer-Assisted
  13. Shaffiq Said Rahmat SM, Abdul Karim MK, Che Isa IN, Abd Rahman MA, Noor NM, Hoong NK
    Comput Biol Med, 2020 08;123:103840.
    PMID: 32658782 DOI: 10.1016/j.compbiomed.2020.103840
    BACKGROUND: Unoptimized protocols, including a miscentered position, might affect the diagnostic outcome of CT examinations. In this study, we investigate the effects of a miscentered position during CT head examination on the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR).

    METHOD: We simulated the CT head examination using a water phantom with a standard protocol (120 kVp/180 mAs) and a low-dose protocol (100 kVp/142 mAs). The table height was adjusted to simulate miscentering in 5 cm steps from the isocenter, where the height was miscentered superiorly (MCS) at 109, 114, 119, and 124 cm, and miscentered inferiorly (MCI) at 99, 94, 89, and 84 cm. Seven circular regions of interest were used, with one drawn at the center, four at the peripheral area of the phantom, and two at the background area of the image.

    RESULTS: For the standard protocol, the mean CNR decreased uniformly as table height increased and significantly differed (p 

    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted
  14. Ardakani AA, Kanafi AR, Acharya UR, Khadem N, Mohammadi A
    Comput Biol Med, 2020 06;121:103795.
    PMID: 32568676 DOI: 10.1016/j.compbiomed.2020.103795
    Fast diagnostic methods can control and prevent the spread of pandemic diseases like coronavirus disease 2019 (COVID-19) and assist physicians to better manage patients in high workload conditions. Although a laboratory test is the current routine diagnostic tool, it is time-consuming, imposing a high cost and requiring a well-equipped laboratory for analysis. Computed tomography (CT) has thus far become a fast method to diagnose patients with COVID-19. However, the performance of radiologists in the diagnosis of COVID-19 was moderate. Accordingly, additional investigations are needed to improve the performance in diagnosing COVID-19. In this study, a rapid and valid method for COVID-19 diagnosis using an artificial intelligence technique is suggested. A total of 1020 CT slices from 108 patients with laboratory-proven COVID-19 (the COVID-19 group) and 86 patients with other atypical and viral pneumonia diseases (the non-COVID-19 group) were included. Ten well-known convolutional neural networks were used to distinguish the COVID-19 group from the non-COVID-19 group: AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, and Xception. Among all networks, the best performance was achieved by ResNet-101 and Xception. ResNet-101 could distinguish COVID-19 from non-COVID-19 cases with an AUC of 0.994 (sensitivity, 100%; specificity, 99.02%; accuracy, 99.51%). Xception achieved an AUC of 0.994 (sensitivity, 98.04%; specificity, 100%; accuracy, 99.02%). However, the performance of the radiologist was moderate, with an AUC of 0.873 (sensitivity, 89.21%; specificity, 83.33%; accuracy, 86.27%). ResNet-101 can be considered a high-sensitivity model to characterize and diagnose COVID-19 infections, and can be used as an adjuvant tool in radiology departments.
    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted
  15. Schwartz TM, Hillis SL, Sridharan R, Lukyanchenko O, Geiser W, Whitman GJ, et al.
    J Med Imaging (Bellingham), 2020 Mar;7(2):022408.
    PMID: 32042859 DOI: 10.1117/1.JMI.7.2.022408
    Purpose: Computer-aided detection (CAD) alerts radiologists to findings potentially associated with breast cancer but is notorious for creating false-positive marks. Although a previous paper found that radiologists took more time to interpret mammograms with more CAD marks, our impression was that this was not true in actual interpretation. We hypothesized that radiologists would selectively disregard these marks when present in larger numbers. Approach: We performed a retrospective review of bilateral digital screening mammograms. We used a mixed linear regression model to assess the relationship between the number of CAD marks and ln(interpretation time) after adjustment for covariates. Both readers and mammograms were treated as random sampling units. Results: Ten radiologists, with a median post-residency experience of 12.5 years (range 6 to 24), interpreted 1832 mammograms. After accounting for the number of images, Breast Imaging Reporting and Data System category, and breast density, the number of CAD marks was positively associated with longer interpretation time, with each additional CAD mark proportionally increasing median interpretation time by 4.35% for a typical reader. Conclusions: We found no support for our hypothesis that radiologists selectively disregard CAD marks when they are present in larger numbers.
    Matched MeSH terms: Radiographic Image Interpretation, Computer-Assisted
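Because the model regresses ln(interpretation time), the reported 4.35% per-mark increase corresponds to a multiplicative effect of exp(beta) per mark. A short worked example of that back-transformation:

```python
import math

# On the ln(time) scale, a per-mark coefficient beta corresponds to a
# multiplicative effect of exp(beta) on median interpretation time.
pct_per_mark = 4.35                        # reported increase per CAD mark
beta = math.log(1 + pct_per_mark / 100)    # implied regression coefficient

def time_multiplier(n_marks):
    """Factor by which the median time grows with n additional CAD marks."""
    return math.exp(beta * n_marks)        # equals 1.0435 ** n_marks
```

For instance, five extra marks multiply the median interpretation time by 1.0435**5, about 1.24.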
  16. Foo LS, Yap WS, Hum YC, Manan HA, Tee YK
    J Magn Reson, 2020 01;310:106648.
    PMID: 31760147 DOI: 10.1016/j.jmr.2019.106648
    Chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) holds great potential to provide new metabolic information for clinical applications such as tumor, stroke, and Parkinson's disease diagnosis. Much active research and development has been conducted to translate this emerging MRI technique into routine clinical applications. In general, there are two CEST quantification techniques: (i) model-free and (ii) model-based techniques. The reliability of these quantification techniques depends heavily on the experimental conditions and the quality of the collected data. Errors such as noise may lead to misleading quantification results and thus inaccurate diagnosis when CEST imaging becomes a standard or routine imaging scan in the future. This paper investigates the accuracy and robustness of these quantification techniques under different signal-to-noise ratio (SNR) levels and magnetic field strengths. The CEST effect quantified before and after adding random Gaussian white noise using model-free and model-based quantification techniques was compared. It was found that the model-free technique consistently yielded a larger average percentage error across all tested parameters compared to its model-based counterpart, and that the model-based technique could withstand an SNR about 3 times lower than the model-free technique could. When applied to noisy brain tumor, ischemic stroke, and Parkinson's disease clinical data, the model-free technique failed to produce significant differences between normal and abnormal tissue, whereas the model-based technique consistently generated significant differences. Although the model-free technique was less accurate and robust, its simplicity and thus speed still make it a good approximation when the SNR is high (>50) or when the CEST effect is large and well-defined. When the SNR is low (<50) and the CEST effect is small, such as in data acquired from clinical field strength scanners (generally 3 T and below), model-based techniques should be considered over the model-free counterpart to maintain an average percentage error of less than 44%, even under the very noisy conditions tested in this work.
    Matched MeSH terms: Image Interpretation, Computer-Assisted
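A common model-free CEST quantification of the kind discussed above is magnetization-transfer-ratio asymmetry: the Z-spectrum value at the negative offset minus the value at the positive (CEST) offset. A sketch on a synthetic Z-spectrum; the abstract does not specify which model-free technique the paper uses, so this asymmetry analysis and its Lorentzian toy spectrum are assumptions:

```python
import numpy as np

def mtr_asym(offsets_ppm, z, target_ppm):
    """Model-free CEST quantification via MTR asymmetry:
    Z(-offset) - Z(+offset) at the target frequency offset."""
    z_pos = np.interp(target_ppm, offsets_ppm, z)
    z_neg = np.interp(-target_ppm, offsets_ppm, z)
    return z_neg - z_pos

# Toy Z-spectrum: water dip at 0 ppm plus a 10% CEST dip at +3.5 ppm.
offsets = np.linspace(-6, 6, 241)
lorentz = lambda w, w0, fwhm, a: a * (fwhm / 2) ** 2 / ((w - w0) ** 2 + (fwhm / 2) ** 2)
z = 1.0 - lorentz(offsets, 0.0, 1.5, 0.9) - lorentz(offsets, 3.5, 1.0, 0.10)
effect = mtr_asym(offsets, z, 3.5)   # recovers roughly the 10% dip
```

A model-based technique would instead fit the Lorentzian pool parameters themselves, which is what buys the noise robustness the paper reports.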
  17. Shoaib MA, Hossain MB, Hum YC, Chuah JH, Mohd Salim MI, Lai KW
    Curr Med Imaging, 2020;16(6):739-751.
    PMID: 32723246 DOI: 10.2174/1573405615666190903143330
    BACKGROUND: Ultrasound (US) imaging can be a convenient and reliable substitute for magnetic resonance imaging in the investigation or screening of articular cartilage injury. However, US images suffer from two main impediments, i.e., low contrast ratio and presence of speckle noise.

    AIMS: A variation of anisotropic diffusion is proposed that can reduce speckle noise without compromising the image quality of the edges and other important details.

    METHODS: For this technique, four gradient thresholds were adopted instead of one. A new diffusivity function that preserves the edges of the resultant image is also proposed. To automatically terminate the iterative procedure, the Mean Absolute Error was implemented as the stopping criterion.

    RESULTS: Numerical results obtained by simulations unanimously indicate that the proposed method outperforms conventional speckle reduction techniques. Nevertheless, this preliminary study has been conducted based on a small number of asymptomatic subjects.

    CONCLUSION: Future work must investigate the feasibility of this method in a large cohort and its clinical validity through testing subjects with a symptomatic cartilage injury.

    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods*
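For reference, the classical single-threshold anisotropic diffusion that this work extends (four thresholds, new diffusivity, automatic stopping) can be sketched as the standard Perona-Malik scheme. The parameters and synthetic test image below are illustrative, not the paper's:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Classical single-threshold Perona-Malik anisotropic diffusion.
    Smoothing is suppressed where gradients exceed the threshold kappa,
    so edges are preserved. Borders wrap for brevity."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping diffusivity
    for _ in range(n_iter):
        # Differences to the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: speckle-like noise should be smoothed, the edge kept.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smoothed = perona_malik(noisy)
```

The paper's variant replaces the single `kappa` with four gradient thresholds and a different diffusivity `g`, and stops iterating when the Mean Absolute Error between iterations levels off.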
  18. Khoo CS, Kim SE, Lee BI, Shin KJ, Ha SY, Park J, et al.
    Eur Neurol, 2020;83(1):56-64.
    PMID: 32320976 DOI: 10.1159/000506591
    INTRODUCTION: Seizures as acute stroke mimics are a diagnostic challenge.

    OBJECTIVE: The aim of the study was to characterize the perfusion patterns on perfusion computed tomography (PCT) in patients with seizures masquerading as acute stroke.

    METHODS: We conducted a study on patients with acute seizures as stroke mimics. The inclusion criteria for this study were patients (1) initially presenting with stroke-like symptoms but finally diagnosed to have seizures and (2) with PCT performed within 72 h of seizures. The PCT of seizure patients (n = 27) was compared with that of revascularized stroke patients (n = 20) as the control group.

    RESULTS: Among the 27 patients with seizures as stroke mimics, 70.4% (n = 19) showed characteristic PCT findings compared with the revascularized stroke patients, which were as follows: (1) multi-territorial cortical hyperperfusion {(73.7% [14/19] vs. 0% [0/20], p = 0.002), sensitivity of 73.7%, negative predictive value (NPV) of 80%}, (2) involvement of the ipsilateral thalamus {(57.9% [11/19] vs. 0% [0/20], p = 0.007), sensitivity of 57.9%, NPV of 71.4%}, and (3) reduced perfusion time {(84.2% [16/19] vs. 0% [0/20], p = 0.001), sensitivity of 84.2%, NPV of 87%}. These 3 findings had 100% specificity and positive predictive value in predicting patients with acute seizures in comparison with reperfused stroke patients. Older age was strongly associated with abnormal perfusion changes (p = 0.038), with a mean age of 66.8 ± 14.5 years versus 49.2 ± 27.4 years (in seizure patients with normal perfusion scan).

    CONCLUSIONS: PCT is a reliable tool to differentiate acute seizures from acute stroke in the emergency setting.

    Matched MeSH terms: Image Interpretation, Computer-Assisted
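The sensitivity and NPV figures reported above follow directly from the 2 × 2 counts in the abstract; for example, for multi-territorial cortical hyperperfusion (14/19 seizure patients positive, 0/20 stroke patients positive):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and negative predictive value
    from a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

# Multi-territorial cortical hyperperfusion, counts from the abstract.
sens, spec, npv = diagnostic_metrics(tp=14, fn=5, tn=20, fp=0)
# sens = 14/19 (73.7%), spec = 1.0, npv = 20/25 (80%)
```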
  19. Rajagopal H, Mokhtar N, Tengku Mohmed Noor Izam TF, Wan Ahmad WK
    PLoS One, 2020;15(5):e0233320.
    PMID: 32428043 DOI: 10.1371/journal.pone.0233320
    Image Quality Assessment (IQA) is essential for the accuracy of systems for automatic recognition of tree species from wood samples. In this study, a No-Reference IQA (NR-IQA) metric for wood images, wood NR-IQA (WNR-IQA), was proposed. Support Vector Regression (SVR) was trained using Generalized Gaussian Distribution (GGD) and Asymmetric Generalized Gaussian Distribution (AGGD) features, which were measured for wood images. Meanwhile, the Mean Opinion Score (MOS) was obtained from subjective evaluation. This was followed by a comparison between the proposed metric, WNR-IQA, and three established NR-IQA metrics, namely the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), deepIQA, and Deep Bilinear Convolutional Neural Networks (DB-CNN), as well as five Full-Reference IQA (FR-IQA) metrics known as MSSIM, SSIM, FSIM, IWSSIM, and GMSD. The proposed WNR-IQA metric, BRISQUE, deepIQA, DB-CNN, and the FR-IQAs were then compared with MOS values to evaluate the performance of the automatic IQA metrics. As a result, the WNR-IQA metric exhibited higher performance than the BRISQUE, deepIQA, DB-CNN, and FR-IQA metrics. The highest-quality images may not be routinely available due to logistic factors, such as dust, poor illumination, and the hot environment present in the timber industry. Moreover, motion blur can occur due to relative motion between the camera and the wood slice. Therefore, the advantage of WNR-IQA lies in its independence from a "perfect" reference image for image quality evaluation.
    Matched MeSH terms: Image Interpretation, Computer-Assisted/methods
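The GGD features mentioned above are typically reduced to a shape parameter estimated by moment matching (BRISQUE-style): the ratio E[x²]/E[|x|]² is matched against its closed form Γ(1/γ)Γ(3/γ)/Γ(2/γ)². A sketch over a grid of candidate shapes; the grid bounds are an implementation choice, not from the paper:

```python
import math
import numpy as np

def ggd_shape(x):
    """Moment-matching estimate of the Generalized Gaussian shape
    parameter, the kind of feature (with AGGD analogues) that is fed
    to the SVR in BRISQUE-style NR-IQA."""
    rho = x.var() / (np.abs(x).mean() ** 2)   # E[x^2] / E[|x|]^2, zero-mean x
    candidates = np.arange(0.2, 10.0, 0.001)
    r = np.array([math.gamma(1 / g) * math.gamma(3 / g) / math.gamma(2 / g) ** 2
                  for g in candidates])
    return candidates[np.argmin((r - rho) ** 2)]

rng = np.random.default_rng(0)
shape = ggd_shape(rng.standard_normal(100_000))   # Gaussian data => shape near 2
```

These shape (and scale) estimates per image, pooled across scales, form the feature vector the SVR maps to a quality score.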