Displaying publications 1 - 20 of 113 in total

  1. Peng P, Wu D, Huang LJ, Wang J, Zhang L, Wu Y, et al.
    Interdiscip Sci, 2024 Mar;16(1):39-57.
    PMID: 37486420 DOI: 10.1007/s12539-023-00580-0
    Breast cancer is commonly diagnosed with mammography. Using image segmentation algorithms to separate lesion areas in mammography can facilitate diagnosis by doctors and reduce their workload, which has important clinical significance. Because large, accurately labeled medical image datasets are difficult to obtain, traditional clustering algorithms are widely used in medical image segmentation as unsupervised models. Traditional unsupervised clustering algorithms have limited learning knowledge. Moreover, some semi-supervised fuzzy clustering algorithms cannot fully mine the information of labeled samples, which results in insufficient supervision. When faced with complex mammography images, the above algorithms cannot accurately segment lesion areas. To address this, a semi-supervised fuzzy clustering algorithm based on knowledge weighting and cluster center learning (WSFCM_V) is presented. According to prior knowledge, three learning modes are proposed: a knowledge weighting method for cluster centers, Euclidean distance weights for unlabeled samples, and learning from the cluster centers of labeled sample sets. These strategies improve clustering performance. On real breast molybdenum target images, the WSFCM_V algorithm is compared with currently popular semi-supervised and unsupervised clustering algorithms, and it achieves the best evaluation index values. Experimental results demonstrate that WSFCM_V segments more accurately than existing clustering algorithms, both for larger lesion regions such as tumor areas and for smaller lesion areas such as calcification points.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
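    A minimal numpy sketch of the weighted fuzzy c-means skeleton that WSFCM_V extends may help make the entry above concrete. The paper's actual update rules (knowledge-weighted centers, Euclidean-distance weights for unlabeled samples, and learning from labeled-set centers) are not given in the abstract, so everything beyond standard weighted FCM here is an illustrative assumption.

        # Hedged sketch: a generic weighted fuzzy c-means iteration (numpy).
        # WSFCM_V's three learning modes are NOT reproduced here; in such a
        # scheme, w would carry the knowledge-derived per-sample weights.
        import numpy as np

        def weighted_fcm(X, c, w, m=2.0, iters=100, eps=1e-9):
            """X: (n, d) samples; c: number of clusters; w: (n,) sample weights."""
            n, _ = X.shape
            U = np.random.dirichlet(np.ones(c), size=n)          # (n, c) memberships
            for _ in range(iters):
                um = (U ** m) * w[:, None]                       # weighted fuzzified memberships
                V = um.T @ X / (um.sum(axis=0)[:, None] + eps)   # cluster centers, (c, d)
                D = np.linalg.norm(X[:, None, :] - V[None], axis=2) + eps  # (n, c) distances
                U = 1.0 / (D ** (2.0 / (m - 1.0)))               # standard FCM membership update
                U /= U.sum(axis=1, keepdims=True)
            return U, V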
  2. Liu H, Huang J, Li Q, Guan X, Tseng M
    Artif Intell Med, 2024 Feb;148:102776.
    PMID: 38325925 DOI: 10.1016/j.artmed.2024.102776
    This study proposes a deep convolutional neural network for the automatic segmentation of glioblastoma brain tumors, aiming at replacing the manual segmentation method, which is both time-consuming and labor-intensive. Automatic segmentation faces many challenges in finely segmenting sub-regions from multi-sequence magnetic resonance images because of the complexity and variability of glioblastomas, such as the loss of boundary information, misclassified regions, and sub-region size. To overcome these challenges, this study introduces a spatial pyramid module and an attention mechanism into the automatic segmentation algorithm, which focus on multi-scale spatial details and context information. The proposed method was tested on the public BraTS 2018, BraTS 2019, BraTS 2020, and BraTS 2021 benchmark datasets. The Dice scores on the enhanced tumor, whole tumor, and tumor core were 79.90%, 89.63%, and 85.89% on BraTS 2018; 77.14%, 89.58%, and 83.33% on BraTS 2019; 77.80%, 90.04%, and 83.18% on BraTS 2020; and 83.48%, 90.70%, and 88.94% on BraTS 2021, offering performance on par with state-of-the-art methods using only 1.90 M parameters. In addition, our approach significantly reduced the requirements for experimental equipment, and the average time taken to segment one case was only 1.48 s; these two benefits render the proposed network intensely competitive for clinical practice.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
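    The spatial pyramid idea in the entry above can be sketched as a set of parallel dilated convolutions whose outputs are fused; a minimal PyTorch version follows. The dilation rates, channel widths, and the placement of the attention mechanism are assumptions, not the paper's reported design.

        # Hedged sketch: an ASPP-style spatial pyramid block in PyTorch.
        # Rates and channel sizes are illustrative assumptions.
        import torch
        import torch.nn as nn

        class SpatialPyramid(nn.Module):
            def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
                super().__init__()
                self.branches = nn.ModuleList(
                    nn.Sequential(
                        nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
                    for r in rates)
                self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

            def forward(self, x):
                # parallel dilated convs capture multi-scale context, then fuse
                return self.project(torch.cat([b(x) for b in self.branches], dim=1))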
  3. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, et al.
    Comput Methods Programs Biomed, 2024 Jan;243:107880.
    PMID: 37924769 DOI: 10.1016/j.cmpb.2023.107880
    Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. The success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of the individual modalities used in isolation for different malignancies. However, manual image interpretation requires extensive disease-specific knowledge, and it is a time-consuming aspect of physicians' daily routines. Deep learning algorithms, akin to a practitioner during training, extract knowledge from images to support the diagnosis process through symptom detection and image enhancement. Existing review papers on PET/CT imaging either include additional modalities or survey many types of AI applications; a comprehensive investigation focused specifically on deep learning applied to PET/CT images has been lacking. This review aims to fill that gap by investigating the characteristics of approaches used in papers that employed deep learning for PET/CT imaging. Within the review, we identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images. We also identified the best pre-processing algorithms and the most effective deep learning models reported for PET/CT while highlighting the current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features and enhancing accuracy and efficiency in diagnosis. However, limitations arise from the scarcity of annotated datasets and challenges in explainability and uncertainty. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  4. Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH
    Tomography, 2023 Dec 05;9(6):2158-2189.
    PMID: 38133073 DOI: 10.3390/tomography9060169
    Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and is subject to artifacts and noise, which compromises image quality and accuracy. In order to address these challenges, deep learning developments have the potential to improve the reconstruction of computed tomography images. In this regard, our research aim is to determine the techniques that are used for 3D deep learning in CT reconstruction and to identify the training and validation datasets that are accessible. This research was performed on five databases. After a careful assessment of each record based on the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and effective, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  5. Pathan RK, Biswas M, Yasmin S, Khandaker MU, Salman M, Youssef AAF
    Sci Rep, 2023 Oct 09;13(1):16975.
    PMID: 37813932 DOI: 10.1038/s41598-023-43852-x
    Sign language recognition is a breakthrough for communication within the deaf-mute community and has been a critical research topic for years. Although some previous studies have successfully recognized sign language, doing so required many costly instruments, including sensors, devices, and high-end processing power. Such drawbacks can be overcome by employing artificial intelligence-based techniques. Since, in this modern era of advanced mobile technology, using a camera to take video or images is much easier, this study demonstrates a cost-effective technique to detect American Sign Language (ASL) using an image dataset. Here, the "Finger Spelling, A" dataset has been used, with 24 letters (excluding j and z, as they involve motion). The main reason for using this dataset is that these images have complex backgrounds with different environments and scene colors. Two layers of image processing have been used: in the first layer, images are processed as a whole for training, and in the second layer, the hand landmarks are extracted. A multi-headed convolutional neural network (CNN) model has been proposed to train on these two layers and tested with 30% of the dataset. To avoid overfitting, data augmentation and dynamic learning rate reduction have been used. With the proposed model, 98.981% test accuracy has been achieved. It is expected that this study may help to develop an efficient human-machine communication system for the deaf-mute community.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
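    A minimal PyTorch sketch of a two-headed network of the kind the entry above describes, with one branch over the whole image and one over extracted hand landmarks. The layer sizes and the 21-landmark assumption (e.g., as produced by common hand-tracking toolkits) are illustrative, not the paper's exact architecture.

        # Hedged sketch: two input heads (image CNN + landmark MLP) fused for
        # 24-way ASL letter classification. Sizes are illustrative assumptions.
        import torch
        import torch.nn as nn

        class TwoHeadASL(nn.Module):
            def __init__(self, n_classes=24, n_landmarks=21):
                super().__init__()
                self.img_head = nn.Sequential(            # processes the raw image
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
                self.lm_head = nn.Sequential(             # processes landmark coordinates
                    nn.Linear(n_landmarks * 2, 64), nn.ReLU())
                self.classifier = nn.Linear(64 + 64, n_classes)

            def forward(self, image, landmarks):
                # image: (B, 3, H, W); landmarks: (B, n_landmarks * 2)
                z = torch.cat([self.img_head(image), self.lm_head(landmarks)], dim=1)
                return self.classifier(z)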
  6. Chong JWR, Khoo KS, Chew KW, Vo DN, Balakrishnan D, Banat F, et al.
    Bioresour Technol, 2023 Feb;369:128418.
    PMID: 36470491 DOI: 10.1016/j.biortech.2022.128418
    The identification of microalgae species is an important tool in scientific research and commercial applications for preventing harmful algal blooms (HABs) and recognizing potential microalgae strains for the bioaccumulation of valuable bioactive ingredients. The aim of this study is to incorporate rapid, high-accuracy, reliable, low-cost, simple, state-of-the-art identification methods, thus increasing the possibility of developing recognition applications that could identify both toxin-producing and valuable microalgae strains. Recently, deep learning (DL) has brought the study of microalgae species identification to a much higher depth of efficiency and accuracy. Accordingly, this review emphasizes the significance of microalgae identification and various forms of machine learning algorithms for image classification, followed by image pre-processing techniques and feature extraction and selection for further classification accuracy. Future prospects regarding the challenges and improvements of potential DL classification model development, applications in microalgae recognition, and image-capturing technologies are discussed accordingly.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  7. Al-Masni MA, Lee S, Al-Shamiri AK, Gho SM, Choi YH, Kim DH
    Comput Biol Med, 2023 Feb;153:106553.
    PMID: 36641933 DOI: 10.1016/j.compbiomed.2023.106553
    Patient movement during a Magnetic Resonance Imaging (MRI) scan can cause severe degradation of image quality. In Susceptibility Weighted Imaging (SWI), several echoes are typically measured during a single repetition period, where the earliest echoes show less contrast between various tissues, while the later echoes are more susceptible to artifacts and signal dropout. In this paper, we propose a knowledge interaction paradigm that jointly learns feature details from multiple distorted echoes by sharing their knowledge with unified training parameters, thereby simultaneously reducing the motion artifacts of all echoes. This is accomplished by developing a new scheme that boosts a Single Encoder with Multiple Decoders (SEMD), which ensures that the generated features are not only fused but also learned together. We call the proposed method Knowledge Interaction Learning between Multi-Echo data (KIL-ME-based SEMD). The proposed KIL-ME-based SEMD makes it possible to share information and gain an understanding of the correlations between the multiple echoes. The main purpose of this work is to correct the motion artifacts while maintaining the image quality and structural details of all motion-corrupted echoes, towards generating high-resolution susceptibility-enhanced contrast images, i.e., SWI, using a weighted average of multi-echo motion-corrected acquisitions. We also compare various potential strategies that might be used to address the problem of reducing artifacts in multi-echo data. The experimental results demonstrate the feasibility and effectiveness of the proposed method, reducing the severity of motion artifacts and improving the overall clinical image quality of all echoes with their associated SWI maps. Significant improvement in image quality is observed using both motion-simulated test data and actual volunteer data with various motion severity strengths. Eventually, by enhancing the overall image quality, the proposed network can increase the effectiveness of physicians' capability to evaluate and correctly diagnose brain MR images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
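    The SEMD idea above, one shared encoder with one decoder per echo, can be sketched compactly in PyTorch. The real network's depth, skip connections, and knowledge-interaction losses are not specified in the abstract, so the fusion step below is an assumption.

        # Hedged sketch: Single Encoder, Multiple Decoders. The mean-fusion of
        # echo features is an illustrative stand-in for the paper's interaction.
        import torch
        import torch.nn as nn

        class SEMD(nn.Module):
            def __init__(self, n_echoes=3, ch=32):
                super().__init__()
                self.encoder = nn.Sequential(             # shared across all echoes
                    nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
                self.decoders = nn.ModuleList(            # one output head per echo
                    nn.Conv2d(ch, 1, 3, padding=1) for _ in range(n_echoes))

            def forward(self, echoes):
                # echoes: list of (B, 1, H, W) motion-corrupted echo images
                feats = [self.encoder(e) for e in echoes]
                fused = torch.stack(feats).mean(0)        # simple cross-echo interaction
                return [dec(fused) for dec in self.decoders]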
  8. Chong JWR, Khoo KS, Chew KW, Ting HY, Show PL
    Biotechnol Adv, 2023;63:108095.
    PMID: 36608745 DOI: 10.1016/j.biotechadv.2023.108095
    Identification of microalgae species is of importance due to the rise of harmful algae blooms affecting both the aquatic habitat and human health. Despite this occurrence, microalgae have been identified as a green biomass and alternative source due to their promising accumulation of bioactive compounds that play a significant role in many industrial applications. Recently, microalgae species identification has been conducted through DNA analysis and various microscopy techniques such as light, scanning electron, transmission electron, and atomic force microscopy. The aforementioned procedures have encouraged researchers to consider alternative ways due to limitations such as costly validation, the need for skilled taxonomists, prolonged analysis, and low accuracy. This review highlights the potential innovations in digital microscopy, with the incorporation of both hardware and software, that can produce reliable recognition, detection, enumeration, and real-time acquisition of microalgae species. Several steps, such as image acquisition, processing, feature extraction, and selection, are discussed for the purpose of generating high image quality by removing unwanted artifacts and noise from the background. Identification of microalgae species is then performed by reliable image classification through machine learning as well as deep learning algorithms such as artificial neural networks, support vector machines, and convolutional neural networks. Overall, this review provides comprehensive insights into numerous possibilities for microalgae image identification, image pre-processing, and machine learning techniques to address the challenges in developing a robust digital classification tool for the future.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  9. Abisha S, Mutawa AM, Murugappan M, Krishnan S
    PLoS One, 2023;18(4):e0284021.
    PMID: 37018344 DOI: 10.1371/journal.pone.0284021
    Different diseases are observed in vegetables, fruits, cereals, and commercial crops by farmers and agricultural experts. Nonetheless, this evaluation process is time-consuming, and initial symptoms are primarily visible only at microscopic levels, limiting the possibility of an accurate diagnosis. This paper proposes an innovative method for identifying and classifying infected brinjal leaves using Deep Convolutional Neural Networks (DCNN) and Radial Basis Feed Forward Neural Networks (RBFNN). We collected 1100 images of brinjal leaf disease caused by five different species (Pseudomonas solanacearum, Cercospora solani, Alternaria melongenea, Pythium aphanidermatum, and Tobacco Mosaic Virus) and 400 images of healthy leaves from agricultural farms in India. First, the original leaf image is preprocessed by a Gaussian filter to reduce the noise and improve the quality of the image through image enhancement. A segmentation method based on expectation-maximization (EM) is then utilized to segment the leaf's diseased regions. Next, the discrete Shearlet transform is used to extract the main features of the images, such as texture, color, and structure, which are then merged to produce vectors. Lastly, DCNN and RBFNN are used to classify brinjal leaves based on their disease types. The DCNN achieved a mean accuracy of 93.30% (with fusion) and 76.70% (without fusion), compared to the RBFNN's 87% (with fusion) and 82% (without fusion), in classifying leaf diseases.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
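    The preprocessing and EM segmentation stages described above can be approximated with a Gaussian filter followed by a Gaussian mixture model, whose fitting procedure is expectation-maximization. This is a plausible stand-in, not the paper's exact formulation.

        # Hedged sketch: Gaussian denoising + EM-based segmentation via a
        # Gaussian mixture over pixel intensities (GaussianMixture is fit by EM).
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.mixture import GaussianMixture

        def em_segment(image, n_regions=3, sigma=1.0):
            smoothed = gaussian_filter(image.astype(float), sigma=sigma)
            pixels = smoothed.reshape(-1, 1)              # one intensity feature per pixel
            gmm = GaussianMixture(n_components=n_regions, random_state=0).fit(pixels)
            return gmm.predict(pixels).reshape(image.shape)  # per-pixel region labels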
  10. Shoaib MA, Chuah JH, Ali R, Hasikin K, Khalil A, Hum YC, et al.
    Comput Intell Neurosci, 2023;2023:4208231.
    PMID: 36756163 DOI: 10.1155/2023/4208231
    Cardiac diseases are one of the key causes of death around the globe, and the number of heart patients increased considerably during the pandemic. Therefore, it is crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, where its boundary and size play a significant role in the evaluation of cardiac function. Owing to automatic segmentation and promising results, left ventricle segmentation using deep learning has attracted a lot of attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. This study also details the network architectures, software, and hardware used for training, along with the publicly available cardiac image datasets and self-prepared datasets incorporated. A summary of the evaluation metrics and results reported by different researchers is also presented. Finally, all this information is summarized and synthesized to help readers understand the motivation and methodology of the various deep learning models, as well as to explore potential solutions to future challenges in LV segmentation.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
  11. Oyelade ON, Ezugwu AE, Almutairi MS, Saha AK, Abualigah L, Chiroma H
    Sci Rep, 2022 Apr 13;12(1):6166.
    PMID: 35418566 DOI: 10.1038/s41598-022-09929-9
    Deep learning (DL) models are becoming pervasive and applicable to computer vision, image processing, and synthesis problems. The performance of these models is often improved through architectural configuration, tweaks, the use of enormous training data, and skillful selection of hyperparameters. The application of deep learning models to medical image processing has yielded interesting performance, capable of correctly detecting abnormalities in medical digital images, sometimes surpassing human physicians. However, advancing research in this domain largely relies on the availability of training datasets. These datasets are sometimes not publicly accessible, insufficient for training, and may also be characterized by a class imbalance among samples. As a result, inadequate training samples and difficulty in accessing new datasets for training deep learning models limit performance and research into new domains. Hence, generative adversarial networks (GANs) have been proposed to mediate this gap by synthesizing data similar to real sample images. However, we observed that benchmark datasets with regions of interest (ROIs) for characterizing abnormalities in breast cancer using digital mammography do not contain sufficient data with a fair distribution of all cases of abnormalities. For instance, architectural distortion and breast asymmetry are sparsely distributed across most publicly available digital mammogram datasets. This paper proposes a GAN model, named ROImammoGAN, which synthesizes ROI-based digital mammograms. Our approach involves the design of a GAN model consisting of both a generator and a discriminator to learn a hierarchy of representations for abnormalities in digital mammograms. Attention is given to architectural distortion, asymmetry, mass, and microcalcification abnormalities so that training distinctively learns the features of each abnormality and generates sufficient images for each category. The proposed GAN model was applied to MIAS datasets, and the performance evaluation yielded competitive accuracy for the synthesized samples. In addition, the quality of the generated images was evaluated using PSNR, SSIM, FSIM, BRISQUE, PIQE, NIQE, FID, and geometry scores. The results showed that ROImammoGAN performed competitively with state-of-the-art GANs. The outcome of this study is a model for augmenting CNN models with ROI-centric image samples for the characterization of abnormalities in breast images.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
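    A minimal DCGAN-style generator/discriminator pair suggests the shape of a mammogram-patch GAN like the one described above; ROImammoGAN's actual architecture and its per-abnormality conditioning are not specified in the abstract, so the sizes below are illustrative.

        # Hedged sketch: minimal GAN pair for 64x64 single-channel ROI patches.
        # Depth, widths, and the lack of class conditioning are assumptions.
        import torch.nn as nn

        def generator(z_dim=100):
            # input: (B, z_dim, 1, 1) noise -> (B, 1, 64, 64) synthetic patch
            return nn.Sequential(
                nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 4x4
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),      # 8x8
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),       # 16x16
                nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),       # 32x32
                nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh())                            # 64x64

        def discriminator():
            # (B, 1, 64, 64) patch -> (B, 1) real/fake probability
            return nn.Sequential(
                nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),       # 32x32
                nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),      # 16x16
                nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),     # 8x8
                nn.Conv2d(128, 1, 8), nn.Flatten(), nn.Sigmoid())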
  12. Shamim S, Awan MJ, Mohd Zain A, Naseem U, Mohammed MA, Garcia-Zapirain B
    J Healthc Eng, 2022;2022:6566982.
    PMID: 35422980 DOI: 10.1155/2022/6566982
    The coronavirus (COVID-19) pandemic has had a terrible impact on human lives globally, with far-reaching consequences for the health and well-being of many people around the world. Statistically, 305.9 million people worldwide had tested positive for COVID-19 and 5.48 million people had died of it as of 10 January 2022. CT scans can be used as an alternative to time-consuming RT-PCR testing for COVID-19. This research work proposes a segmentation approach for identifying ground glass opacity (GGO), the region of interest, in CT images of coronavirus patients, with a modified structure of the Unet model used to classify the region of interest at the pixel level. The challenge is that the GGO often appears indistinguishable from a healthy lung in the initial stages of COVID-19; to cope with this, an increased set of weights in the contracting and expanding Unet paths and an improved convolutional module are added to establish the connection between the encoder and decoder pipelines. This gives the model a strong capacity to segment the GGO in COVID-19 cases, and the proposed model is referred to as "convUnet." The experiment was performed on the Medseg1 dataset, and the addition of a set of weights at each layer of the model and the modification of the connecting module in Unet led to an improvement in overall segmentation results. The quantitative results obtained using accuracy, recall, precision, dice-coefficient, F1-score, and IoU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which is better than that obtained using Unet and other state-of-the-art models. Therefore, this segmentation approach proved to be accurate, fast, and reliable in helping doctors diagnose COVID-19 quickly and efficiently.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
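    The pixel-level metrics reported above can be computed from binary masks as follows; note that at the pixel level the dice coefficient and F1-score coincide, which is consistent with the near-identical values reported. Thresholding and averaging conventions are assumptions.

        # Hedged sketch: segmentation metrics from binary masks (numpy).
        # Assumes non-empty prediction and ground-truth masks.
        import numpy as np

        def seg_metrics(pred, truth):
            pred, truth = pred.astype(bool), truth.astype(bool)
            tp = np.logical_and(pred, truth).sum()
            fp = np.logical_and(pred, ~truth).sum()
            fn = np.logical_and(~pred, truth).sum()
            tn = np.logical_and(~pred, ~truth).sum()
            return {
                "accuracy": (tp + tn) / pred.size,
                "precision": tp / (tp + fp),
                "recall": tp / (tp + fn),
                "dice": 2 * tp / (2 * tp + fp + fn),   # equals F1 at the pixel level
                "iou": tp / (tp + fp + fn),
            }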
  13. Ninomiya K, Arimura H, Chan WY, Tanaka K, Mizuno S, Muhammad Gowdh NF, et al.
    PLoS One, 2021;16(1):e0244354.
    PMID: 33428651 DOI: 10.1371/journal.pone.0244354
    OBJECTIVES: To propose a novel robust radiogenomics approach to the identification of epidermal growth factor receptor (EGFR) mutations among patients with non-small cell lung cancer (NSCLC) using Betti numbers (BNs).

    MATERIALS AND METHODS: Contrast-enhanced computed tomography (CT) images of 194 multi-racial NSCLC patients (79 EGFR mutants and 115 wildtypes) were collected from three different countries using five manufacturers' scanners with a variety of scanning parameters. Ninety-nine cases obtained from the University of Malaya Medical Centre (UMMC) in Malaysia were used for the training and validation procedures. Forty-one cases collected from the Kyushu University Hospital (KUH) in Japan and fifty-four cases obtained from The Cancer Imaging Archive (TCIA) in the United States were used for the test procedure. Radiomic features were obtained from BN maps, which represent topologically invariant heterogeneous characteristics of lung cancer on CT images, by applying histogram- and texture-based feature computations. A BN-based signature was determined using support vector machine (SVM) models with the best combination of features that maximized a robustness index (RI), defined to favor a higher total area under the receiver operating characteristic curves (AUCs) and a lower difference in AUCs between the training and the validation sets. The SVM model was built using the signature and optimized in a five-fold cross validation. The BN-based model was compared to conventional original-image (OI)- and wavelet-decomposition (WD)-based models with respect to the RI between the validation and the test.

    RESULTS: The BN-based model showed a higher RI of 1.51 compared with the models based on the OI (RI: 1.33) and the WD (RI: 1.29).

    CONCLUSION: The proposed model showed higher robustness than the conventional models in the identification of EGFR mutations among NSCLC patients. The results suggested the robustness of the BN-based approach against variations in image scanner/scanning parameters.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
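    The abstract defines the robustness index (RI) only qualitatively (higher total AUC, lower train/validation AUC gap). One plausible reading, shown below, is the sum of the AUCs minus their absolute difference; the paper's actual formula may differ.

        # Hedged sketch: an ASSUMED robustness-index form, not the paper's formula.
        from sklearn.metrics import roc_auc_score

        def robustness_index(y_train, p_train, y_val, p_val):
            auc_tr = roc_auc_score(y_train, p_train)
            auc_va = roc_auc_score(y_val, p_val)
            # reward high total AUC, penalize the train/validation gap
            return (auc_tr + auc_va) - abs(auc_tr - auc_va)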
  14. Chen Z, Rajamanickam L, Cao J, Zhao A, Hu X
    PLoS One, 2021;16(12):e0260758.
    PMID: 34879097 DOI: 10.1371/journal.pone.0260758
    This study aims to solve the overfitting problem caused by insufficient labeled images in the automatic image annotation field. We propose a transfer learning model called CNN-2L that incorporates the label localization strategy described in this study. The model consists of an InceptionV3 network pretrained on the ImageNet dataset and a label localization algorithm. First, the pretrained InceptionV3 network extracts features from the target dataset that are used to train a specific classifier and fine-tune the entire network to obtain an optimal model. Then, the obtained model is used to derive the probabilities of the predicted labels. For this purpose, we introduce a squeeze and excitation (SE) module into the network architecture that augments the useful feature information, inhibits useless feature information, and conducts feature reweighting. Next, we perform label localization to obtain the label probabilities and determine the final label set for each image. During this process, the number of labels must be determined. The optimal K value is obtained experimentally and used to determine the number of predicted labels, thereby solving the empty label set problem that occurs when the predicted label values of images are below a fixed threshold. Experiments on the Corel5k multilabel image dataset verify that CNN-2L improves the labeling precision by 18% and 15% compared with the traditional multiple-Bernoulli relevance model (MBRM) and joint equal contribution (JEC) algorithms, respectively, and it improves the recall by 6% compared with JEC. Additionally, it improves the precision by 20% and 11% compared with the deep learning methods Weight-KNN and adaptive hypergraph learning (AHL), respectively. Although CNN-2L fails to improve the recall compared with the semantic extension model (SEM), it improves the comprehensive F1 index by 1%. The experimental results reveal that the proposed transfer learning model based on a label localization strategy is effective for automatic image annotation and substantially boosts multilabel image annotation performance.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
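    The squeeze-and-excitation module named above is a standard, published block (Hu et al.); a minimal PyTorch version follows, with the reduction ratio as an illustrative choice.

        # Hedged sketch: a standard SE block; reduction ratio is illustrative.
        import torch.nn as nn

        class SEBlock(nn.Module):
            def __init__(self, channels, reduction=16):
                super().__init__()
                self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global context per channel
                self.fc = nn.Sequential(                   # excitation: channel gates in (0, 1)
                    nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
                    nn.Linear(channels // reduction, channels), nn.Sigmoid())

            def forward(self, x):
                b, c, _, _ = x.shape
                w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
                return x * w                               # reweight feature channels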
  15. Aly CA, Abas FS, Ann GH
    Sci Prog, 2021;104(2):368504211005480.
    PMID: 33913378 DOI: 10.1177/00368504211005480
    INTRODUCTION: Action recognition is a challenging time series classification task that has received much attention in the recent past due to its importance in critical applications, such as surveillance, visual behavior study, topic discovery, security, and content retrieval.

    OBJECTIVES: The main objective of the research is to develop robust, high-performance human action recognition techniques. A combination of local and holistic feature extraction methods is used, chosen by analyzing which features are most effective to extract, followed by simple, high-performance machine learning algorithms.

    METHODS: This paper presents three robust action recognition techniques based on a series of image analysis methods to detect activities in different scenes. The general scheme architecture consists of shot boundary detection, shot frame rate re-sampling, and compact feature vector extraction. This process is achieved by emphasizing variations and extracting strong patterns in feature vectors before classification.

    RESULTS: The proposed schemes are tested on datasets with cluttered backgrounds, low- or high-resolution videos, different viewpoints, and different camera motion conditions, namely, the Hollywood-2, KTH, UCF11 (YouTube actions), and Weizmann datasets. The proposed schemes produced highly accurate video analysis results compared to those of other works on these four widely used datasets. The First, Second, and Third Schemes provide recognition accuracies of 57.8%, 73.6%, and 52.0% on Hollywood2; 94.5%, 97.0%, and 59.3% on KTH; 94.5%, 95.6%, and 94.2% on UCF11; and 98.9%, 97.8%, and 100% on Weizmann, respectively.

    CONCLUSION: Each of the proposed schemes provides high recognition accuracy compared to other state-of-the-art methods; the Second Scheme in particular gives results comparable to other benchmarked approaches.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
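    The shot boundary detection stage named in the METHODS above is commonly implemented as a frame-to-frame histogram distance with a cut threshold; a minimal sketch follows. The schemes' actual detector and threshold are not specified in the abstract.

        # Hedged sketch: histogram-difference shot boundary detection (numpy).
        # The bin count and threshold are illustrative assumptions.
        import numpy as np

        def shot_boundaries(frames, bins=64, threshold=0.4):
            """frames: iterable of grayscale arrays (0-255); returns cut indices."""
            hists = [np.histogram(f, bins=bins, range=(0, 255))[0] for f in frames]
            hists = [h / h.sum() for h in hists]               # normalize per frame
            cuts = []
            for i in range(1, len(hists)):
                d = 0.5 * np.abs(hists[i] - hists[i - 1]).sum()  # L1 histogram distance in [0, 1]
                if d > threshold:
                    cuts.append(i)
            return cuts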
  16. Usman OL, Muniyandi RC, Omar K, Mohamad M
    PLoS One, 2021;16(2):e0245579.
    PMID: 33630876 DOI: 10.1371/journal.pone.0245579
    Achieving biologically interpretable neural-biomarkers and features from neuroimaging datasets is a challenging task in an MRI-based dyslexia study. This challenge becomes more pronounced when the needed MRI datasets are collected from multiple heterogeneous sources with inconsistent scanner settings. This study presents a method of improving the biological interpretation of dyslexia's neural-biomarkers from MRI datasets sourced from publicly available open databases. The proposed system utilized a modified histogram normalization (MHN) method to improve dyslexia neural-biomarker interpretations by mapping the pixel intensities of low-quality input neuroimages to the range between the low-intensity region of interest (ROIlow) and high-intensity region of interest (ROIhigh) of a high-quality image. This was achieved after initial image smoothing using the Gaussian filter method with an isotropic kernel of size 4 mm. The performance of the proposed smoothing and normalization methods was evaluated based on three image post-processing experiments: ROI segmentation, gray matter (GM) tissue volume estimation, and deep learning (DL) classification using the Computational Anatomy Toolbox (CAT12) and pre-trained models in a MATLAB working environment. The three experiments were preceded by pre-processing tasks such as image resizing, labelling, patching, and non-rigid registration. Our results showed that the best smoothing was achieved at a scale value σ = 1.25, with a 0.9% increment in the peak signal-to-noise ratio (PSNR). Results from the three image post-processing experiments confirmed the efficacy of the proposed methods. Evidence from our analysis showed that the proposed MHN and Gaussian smoothing methods can improve the comparability of image features and neural-biomarkers of dyslexia, with a statistically significant, high Dice similarity coefficient (DSC) index, low mean square error (MSE), and improved tissue volume estimations. After 10 repetitions of 10-fold cross-validation, the highest accuracy achieved by the DL models was 94.7% at a 95% confidence interval (CI) level. Finally, our findings confirmed that the proposed MHN method significantly outperformed the state-of-the-art histogram matching normalization method.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
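    The core MHN step described above, mapping input intensities into the [ROIlow, ROIhigh] range of a high-quality reference after Gaussian smoothing, can be sketched as a linear range mapping; the paper's actual histogram-based mapping is likely more elaborate than this.

        # Hedged sketch: Gaussian smoothing + a simple linear mapping of
        # intensities into the reference [roi_low, roi_high] range. This is an
        # assumption about the mapping; only the sigma value comes from the paper.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def mhn_map(image, roi_low, roi_high, sigma=1.25):
            smoothed = gaussian_filter(image.astype(float), sigma=sigma)
            lo, hi = smoothed.min(), smoothed.max()
            scaled = (smoothed - lo) / (hi - lo + 1e-12)   # normalize to [0, 1]
            return roi_low + scaled * (roi_high - roi_low) # map into the ROI intensity range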
  17. Said MA, Musarudin M, Zulkaffli NF
    Ann Nucl Med, 2020 Dec;34(12):884-891.
    PMID: 33141408 DOI: 10.1007/s12149-020-01543-x
    OBJECTIVE: 18F is the most extensively used radioisotope in current clinical practice of PET imaging. This selection is based on several criteria: it is a pure PET radioisotope with an optimum half-life and a low positron energy that contributes to a smaller positron range. In addition to 18F, other radioisotopes such as 68Ga and 124I are currently gaining much attention with the increase in interest in new PET tracers entering clinical trials. This study aims to determine the minimal scan time per bed position (Tmin) for 124I and 68Ga based on the quantitative differences in PET imaging of 68Ga and 124I relative to 18F.

    METHODS: The European Association of Nuclear Medicine (EANM) procedure guidelines version 2.0 for FDG-PET tumor imaging were adhered to for this purpose. A NEMA2012/IEC2008 phantom was filled to a tumor-to-background ratio of 10:1, with activity concentrations of 30 kBq/ml ± 10% and 3 kBq/ml ± 10% for each radioisotope. The phantom was scanned using different acquisition times per bed position (1, 5, 7, 10 and 15 min) to determine Tmin. Tmin was defined using an image coefficient of variation (COV) of 15%.

    RESULTS: The Tmin values obtained for 18F, 68Ga and 124I were 3.08, 3.24 and 32.93 min, respectively. Quantitative analyses among the 18F, 68Ga and 124I images were performed. Signal-to-noise ratio (SNR), contrast recovery coefficient (CRC), and visibility (VH) were the image quality parameters analysed in this study. Generally, 68Ga and 18F gave better image quality than 124I for all the parameters studied.

    CONCLUSION: We have defined Tmin for 18F, 68Ga and 124I PET/CT imaging based on NEMA2012/IEC2008 phantom imaging. Despite the long scanning time suggested by Tmin, image quality improves, especially for 124I. In clinical practice, however, the long acquisition time may cause patient discomfort and motion artifacts.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods
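    Given the COV-versus-acquisition-time measurements described above, Tmin can be read off as the shortest time at which COV drops to 15%, e.g., by interpolation; the COV values in the example below are invented for illustration only.

        # Hedged sketch: Tmin as the acquisition time where COV reaches 15%,
        # by interpolating measured COV-vs-time points.
        import numpy as np

        def t_min(times, covs, target=15.0):
            """times: scan times per bed (min); covs: measured COV (%) at each time."""
            order = np.argsort(times)
            t, c = np.asarray(times, float)[order], np.asarray(covs, float)[order]
            # COV decreases with time, so interpolate time as a function of COV
            return float(np.interp(target, c[::-1], t[::-1]))

        # example with MADE-UP COV measurements (not from the paper)
        print(t_min([1, 5, 7, 10, 15], [28.0, 12.5, 10.8, 9.0, 7.2]))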
  18. Rajagopal H, Mokhtar N, Tengku Mohmed Noor Izam TF, Wan Ahmad WK
    PLoS One, 2020;15(5):e0233320.
    PMID: 32428043 DOI: 10.1371/journal.pone.0233320
    Image Quality Assessment (IQA) is essential for the accuracy of systems that automatically recognize tree species from wood samples. In this study, a No-Reference IQA (NR-IQA) metric for wood, WNR-IQA, was proposed to assess the quality of wood images. Support Vector Regression (SVR) was trained using Generalized Gaussian Distribution (GGD) and Asymmetric Generalized Gaussian Distribution (AGGD) features measured from wood images, while the Mean Opinion Score (MOS) was obtained from subjective evaluation. This was followed by a comparison between the proposed WNR-IQA metric, three established NR-IQA metrics, namely the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), deepIQA, and Deep Bilinear Convolutional Neural Networks (DB-CNN), and five Full-Reference IQA (FR-IQA) metrics known as MSSIM, SSIM, FSIM, IWSSIM, and GMSD. The proposed WNR-IQA metric, BRISQUE, deepIQA, DB-CNN, and the FR-IQAs were then compared with MOS values to evaluate the performance of the automatic IQA metrics. As a result, the WNR-IQA metric exhibited higher performance than the BRISQUE, deepIQA, DB-CNN, and FR-IQA metrics. The highest-quality images may not be routinely available due to logistical factors such as dust, poor illumination, and the hot environments present in the timber industry; moreover, motion blur can occur due to relative motion between the camera and the wood slice. Therefore, the advantage of WNR-IQA lies in its independence from a "perfect" reference image for image quality evaluation.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
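    The WNR-IQA training loop described above, SVR over natural-scene-statistics features against MOS, can be sketched as follows, with feature extraction assumed done elsewhere and stand-in random data in place of real wood images.

        # Hedged sketch: SVR quality predictor evaluated by rank correlation
        # against MOS. Feature vectors and scores here are STAND-INS, not data
        # from the paper; GGD/AGGD feature extraction is assumed done upstream.
        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 36))                   # stand-in feature vectors (120 images)
        mos = 3 + X[:, 0] + 0.3 * rng.normal(size=120)   # stand-in subjective scores

        model = SVR(kernel="rbf", C=1.0).fit(X[:90], mos[:90])
        srocc, _ = spearmanr(model.predict(X[90:]), mos[90:])
        print(f"SROCC on held-out images: {srocc:.3f}")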
  19. Al-Shabi M, Lan BL, Chan WY, Ng KH, Tan M
    Int J Comput Assist Radiol Surg, 2019 Oct;14(10):1815-1819.
    PMID: 31020576 DOI: 10.1007/s11548-019-01981-7
    PURPOSE: Lung nodules have very diverse shapes and sizes, which makes classifying them as benign/malignant a challenging problem. In this paper, we propose a novel method to predict nodule malignancy that has the capability to analyze the shape and size of a nodule using a global feature extractor, as well as its density and structure using a local feature extractor.

    METHODS: We propose to use Residual Blocks with a 3 × 3 kernel size for local feature extraction and Non-Local Blocks to extract the global features. The Non-Local Block has the ability to extract global features without using a huge number of parameters. The key idea behind the Non-Local Block is to apply matrix multiplications between features on the same feature maps.

    RESULTS: We trained and validated the proposed method on the LIDC-IDRI dataset which contains 1018 computed tomography scans. We followed a rigorous procedure for experimental setup, namely tenfold cross-validation, and ignored the nodules that had been annotated by

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
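    The Non-Local Block's "matrix multiplications between features on the same feature maps" is the embedded-Gaussian self-attention of Wang et al.; a minimal PyTorch sketch follows, with channel sizes as illustrative choices.

        # Hedged sketch: an embedded-Gaussian Non-Local block. The halved inner
        # channel width and the residual form are standard but assumed here.
        import torch
        import torch.nn as nn

        class NonLocalBlock(nn.Module):
            def __init__(self, ch):
                super().__init__()
                self.theta = nn.Conv2d(ch, ch // 2, 1)     # query projection
                self.phi = nn.Conv2d(ch, ch // 2, 1)       # key projection
                self.g = nn.Conv2d(ch, ch // 2, 1)         # value projection
                self.out = nn.Conv2d(ch // 2, ch, 1)

            def forward(self, x):
                b, c, h, w = x.shape
                q = self.theta(x).view(b, c // 2, h * w)
                k = self.phi(x).view(b, c // 2, h * w)
                v = self.g(x).view(b, c // 2, h * w)
                attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)  # (B, HW, HW) affinities
                y = (v @ attn.transpose(1, 2)).view(b, c // 2, h, w) # aggregate global context
                return x + self.out(y)                               # residual connection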
  20. Safdar A, Khan MA, Shah JH, Sharif M, Saba T, Rehman A, et al.
    Microsc Res Tech, 2019 Sep;82(9):1542-1556.
    PMID: 31209970 DOI: 10.1002/jemt.23320
    Plant diseases are responsible for economic losses in agricultural countries. The manual diagnosis of plant diseases has been a key challenge over the last decade; therefore, researchers in this area have introduced automated systems. In this research work, an automated system is proposed for citrus fruit disease recognition using computer vision techniques. The proposed method incorporates five fundamental steps: preprocessing, disease segmentation, feature extraction and reduction, fusion, and classification. In the first phase, noise is removed, followed by a contrast stretching procedure. Later, the watershed method is applied to extract the infected regions. Shape, texture, and color features are subsequently computed from these infected regions. In the fourth step, the reduced features are fused using a serial-based approach, followed by a final step of classification using a multiclass support vector machine. For dimensionality reduction, principal component analysis is utilized, a statistical procedure that applies an orthogonal transformation to a set of observations. Three different image datasets (Citrus Image Gallery, Plant Village, and self-collected) are combined in this research, achieving a classification accuracy of 95.5%. The results make clear that the proposed method outperforms several existing methods in precision and accuracy.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
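    The fusion-reduction-classification tail of the pipeline above maps naturally onto a scikit-learn pipeline; the component count and SVM settings below are illustrative assumptions.

        # Hedged sketch: serial (concatenation) fusion of feature sets, PCA
        # reduction, and a multiclass SVM. Settings are illustrative choices.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_classifier(shape_f, texture_f, color_f, labels):
            X = np.hstack([shape_f, texture_f, color_f])   # serial-based feature fusion
            clf = make_pipeline(
                StandardScaler(),
                PCA(n_components=0.95),                    # keep 95% of the variance
                SVC(kernel="rbf", decision_function_shape="ovr"))
            return clf.fit(X, labels)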