Displaying all 3 publications

  1. Khan Z, Yahya N, Alsaih K, Ali SSA, Meriaudeau F
    Sensors (Basel), 2020 Jun 03;20(11).
    PMID: 32503330 DOI: 10.3390/s20113183
    In this paper, we present an evaluation of four encoder-decoder CNNs for the segmentation of the prostate gland in T2W magnetic resonance imaging (MRI) images. The four selected CNNs are FCN, SegNet, U-Net, and DeepLabV3+, which were originally proposed for the segmentation of road scenes, biomedical images, and natural images. Segmentation of the prostate in T2W MRI images is an important step in the automatic diagnosis of prostate cancer, enabling better lesion detection and staging. Many research efforts have therefore been devoted to improving prostate gland segmentation in MRI images. The main challenges are the blurry prostate boundary and the variability of the prostate's anatomical structure. In this work, we investigated the performance of encoder-decoder CNNs for segmentation of the prostate gland in T2W MRI. Image pre-processing techniques, including image resizing, center-cropping, and intensity normalization, are applied to address inter-patient and inter-scanner variability as well as the dominance of background pixels over prostate pixels. In addition, to enrich the network with more data, increase data variation, and improve accuracy, patch extraction and data augmentation are applied prior to training the networks. Furthermore, class weight balancing is used to avoid biased networks, since background pixels far outnumber prostate pixels; this class imbalance is addressed with a weighted cross-entropy loss function during training of the CNN model. The performance of the CNNs is evaluated in terms of the Dice similarity coefficient (DSC), and our experimental results show that patch-wise DeepLabV3+ gives the best performance, with a DSC of 92.8%. This is the highest DSC score among FCN, SegNet, and U-Net, and it is also competitive with a recently published state-of-the-art prostate segmentation method.
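
    The class-imbalance handling and evaluation metric described above can be illustrated with a short PyTorch sketch. The class weights, tensor shapes, and variable names below are assumptions made for illustration only, not the authors' exact configuration.

    # Sketch: class-weighted cross-entropy and Dice similarity coefficient for
    # binary (background vs. prostate) segmentation. All values are illustrative.
    import torch
    import torch.nn as nn

    def dice_coefficient(pred_mask, true_mask, eps=1e-6):
        """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of the same shape."""
        pred, true = pred_mask.bool(), true_mask.bool()
        intersection = (pred & true).sum().item()
        return (2.0 * intersection + eps) / (pred.sum().item() + true.sum().item() + eps)

    # Hypothetical class weights: background pixels greatly outnumber prostate
    # pixels, so the minority (prostate) class receives the larger weight.
    class_weights = torch.tensor([0.1, 0.9])        # [background, prostate] -- assumed values
    criterion = nn.CrossEntropyLoss(weight=class_weights)

    # Dummy data: logits are (batch, classes, H, W), targets are (batch, H, W).
    logits = torch.randn(2, 2, 128, 128)            # network output for two 128x128 patches
    targets = torch.randint(0, 2, (2, 128, 128))    # ground-truth labels
    loss = criterion(logits, targets)
    dsc = dice_coefficient(logits.argmax(dim=1), targets)
    print(f"weighted CE loss = {loss.item():.4f}, DSC = {dsc:.4f}")
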
  2. Alsaih K, Yusoff MZ, Tang TB, Faye I, Mériaudeau F
    Comput Methods Programs Biomed, 2020 Oct;195:105566.
    PMID: 32504911 DOI: 10.1016/j.cmpb.2020.105566
    BACKGROUND AND OBJECTIVES: Older people are more likely to be diagnosed with retinal diseases in developed countries. Leakage from retinal capillaries causes the retina to swell and leads to acute vision loss, a condition known as age-related macular degeneration (AMD). The disease cannot be adequately diagnosed using fundus images alone, as depth information is not available. Variations in retinal volume help in monitoring ophthalmological abnormalities. Therefore, high-fidelity AMD segmentation in the optical coherence tomography (OCT) imaging modality has drawn the attention of researchers as well as medical doctors. Over the years, many methods encompassing machine learning approaches and convolutional neural network (CNN) strategies have been proposed for object detection and image segmentation.

    METHODS: In this paper, we analyze four widespread deep learning models designed for the segmentation of three retinal fluids, producing dense predictions on the RETOUCH challenge data. We aim to demonstrate how a patch-based approach can push the performance of each method. In addition, we evaluate the methods on the OPTIMA challenge dataset to assess how well the networks generalize. The analysis is divided into two parts: the comparison between the four approaches, and the significance of patching the images.

    RESULTS: The performance of the networks trained on the RETOUCH dataset is higher than human performance. We further assessed the generalization of the best network by fine-tuning it, achieving a mean Dice similarity coefficient (DSC) of 0.85. Of the three fluid types, intraretinal fluid (IRF) is the best recognized, with the highest DSC value of 0.922 achieved on the Spectralis dataset. Additionally, the highest average DSC score of 0.84 is achieved by the PaDeeplabv3+ model on the Cirrus dataset.

    CONCLUSIONS: The proposed method segments the three fluids in the retina with high DSC values. Fine-tuning the networks trained on the RETOUCH dataset makes them perform better and train faster than training from scratch. Enriching the networks with a variety of input shapes through patch extraction helped to segment the fluids better than using full images.
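
    A patch-based input pipeline of the kind evaluated above can be sketched with a simple sliding window; the patch size, stride, and B-scan dimensions below are assumed values for illustration, not those used in the study.

    # Sketch: extract fixed-size overlapping patches from a 2D scan (e.g. an OCT B-scan).
    import numpy as np

    def extract_patches(image, patch_size=128, stride=64):
        """Return a list of (patch, row, col) tuples covering the image."""
        patches = []
        rows, cols = image.shape
        for r in range(0, rows - patch_size + 1, stride):
            for c in range(0, cols - patch_size + 1, stride):
                patches.append((image[r:r + patch_size, c:c + patch_size], r, c))
        return patches

    # Example: a dummy 496 x 512 B-scan yields overlapping 128 x 128 patches.
    bscan = np.random.rand(496, 512).astype(np.float32)
    patches = extract_patches(bscan)
    print(f"{len(patches)} patches of shape {patches[0][0].shape}")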

  3. Alsaih K, Lemaitre G, Rastgoo M, Massich J, Sidibé D, Meriaudeau F
    Biomed Eng Online, 2017 Jun 07;16(1):68.
    PMID: 28592309 DOI: 10.1186/s12938-017-0352-9
    BACKGROUND: Spectral domain optical coherence tomography (SD-OCT) is the imaging modality most widely used in ophthalmology to detect diabetic macular edema (DME). Indeed, it offers an accurate visualization of the morphology of the retina as well as of the retinal layers.

    METHODS: The dataset used in this study was acquired by the Singapore Eye Research Institute (SERI) using a CIRRUS TM (Carl Zeiss Meditec, Inc., Dublin, CA, USA) SD-OCT device. The dataset consists of 32 OCT volumes (16 DME and 16 normal cases). Each volume contains 128 B-scans with a resolution of 1024 px × 512 px, resulting in more than 3800 images being processed. All SD-OCT volumes were read and assessed by trained graders and identified as normal or DME cases based on evaluation of retinal thickening, hard exudates, intraretinal cystoid space formation, and subretinal fluid. Within the DME subset, a large number of lesions have been selected to create a rather complete and diverse DME dataset. This paper presents an automatic classification framework for SD-OCT volumes to identify DME versus normal volumes. In this regard, a generic pipeline including pre-processing, feature detection, feature representation, and classification was investigated. More precisely, extraction of histogram of oriented gradients (HOG) and local binary pattern (LBP) features within a multiresolution approach is used, together with principal component analysis (PCA) and bag-of-words (BoW) representations.

    RESULTS AND CONCLUSION: Besides comparing individual and combined features, different representation approaches and different classifiers are evaluated. The best results are obtained for LBP vectors represented with PCA and classified using a linear support vector machine (SVM), leading to a sensitivity (SE) and specificity (SP) of 87.5% and 87.5%, respectively.
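
    The best-performing configuration reported above (LBP features, PCA representation, linear SVM) can be approximated with a short scikit-image / scikit-learn sketch; the LBP parameters, number of principal components, and the random data are assumptions for illustration only.

    # Sketch: LBP histogram features -> PCA -> linear SVM for volume-level classification.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    def lbp_histogram(image, points=8, radius=1):
        """Summarize one B-scan by the histogram of its uniform LBP codes."""
        codes = local_binary_pattern(image, points, radius, method="uniform")
        hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
        return hist

    # Dummy data: each "volume" is described by concatenated per-slice LBP histograms.
    rng = np.random.default_rng(0)
    X = np.stack([np.concatenate([lbp_histogram(rng.random((64, 64))) for _ in range(4)])
                  for _ in range(32)])
    y = rng.integers(0, 2, size=32)        # 0 = normal, 1 = DME (random labels for the sketch)

    clf = make_pipeline(StandardScaler(), PCA(n_components=8), LinearSVC())
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))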
