Displaying all 6 publications

  1. Wang B, Liu J, Xie J, Zhang X, Wang Z, Cao Z, et al.
    Clin Radiol, 2024 May 27.
    PMID: 38944542 DOI: 10.1016/j.crad.2024.05.016
    AIM: Radiomics involves the extraction of quantitative data from medical images to facilitate the diagnosis, prognosis, and staging of tumors. This study provides a comprehensive overview of the efficacy of radiomics in prognostic applications for head and neck cancer (HNC) in recent years. It undertakes a systematic review of prognostic models specific to HNC and conducts a meta-analysis to evaluate their predictive performance.

    MATERIALS AND METHODS: This study adhered rigorously to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for literature searches. The literature databases, including PubMed, Embase, Cochrane, and Scopus were systematically searched individually. The methodological quality of the incorporated studies underwent assessment utilizing the radiomics quality score (RQS) tool. A random-effects meta-analysis employing the Harrell concordance index (C-index) was conducted to evaluate the performance of all radiomics models.

    RESULTS: Among the 388 studies retrieved, 24 studies encompassing a total of 6,978 cases were incorporated into the systematic review. Furthermore, eight studies, focusing on overall survival as an endpoint, were included in the meta-analysis. The meta-analysis revealed that the estimated random effect of the C-index for all studies utilizing radiomics alone was 0.77 (0.71-0.82), with a substantial degree of heterogeneity indicated by an I² of 80.17%.
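
The pooling described above is commonly done with a DerSimonian-Laird random-effects model. The sketch below (in Python, with hypothetical study-level C-indices and variances, not values from the reviewed studies) shows how a pooled estimate, its 95% confidence interval, and the I² heterogeneity statistic are computed:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level estimates (e.g. C-indices) with a
    DerSimonian-Laird random-effects model; also return I^2."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0    # heterogeneity fraction
    # random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical C-indices and variances for eight studies
effects = [0.71, 0.74, 0.82, 0.68, 0.79, 0.85, 0.73, 0.80]
variances = [0.0009, 0.0012, 0.0008, 0.0015, 0.0010, 0.0007, 0.0011, 0.0009]
pooled, ci, i2 = dersimonian_laird(effects, variances)
```

With heterogeneous inputs, tau² grows, the random-effects weights even out, and the confidence interval widens relative to a fixed-effect pool.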

    CONCLUSIONS: Based on this review, prognostic modeling utilizing radiomics has demonstrated enhanced efficacy for head and neck cancers; however, there remains room for improvement in this approach. In the future, advancements are warranted in the integration of clinical parameters and multimodal features, balancing multicenter data, as well as in feature screening and model construction within this field.

  2. Wen D, Cheng Z, Li J, Zheng X, Yao W, Dong X, et al.
    J Neurosci Methods, 2021 Nov 01;363:109353.
    PMID: 34492241 DOI: 10.1016/j.jneumeth.2021.109353
    BACKGROUND: The application of deep learning models to electroencephalogram (EEG) signal classification has recently become a popular research topic. Several deep learning models have been proposed to classify EEG signals in patients with various neurological diseases. However, no effective deep learning model for event-related potential (ERP) signal classification is yet available for amnestic mild cognitive impairment (aMCI) with type 2 diabetes mellitus (T2DM).

    METHOD: This study proposed a single-scale multi-input convolutional neural network (SSMICNN) method to classify ERP signals between aMCI patients with T2DM and a control group. First, the 18-electrode ERP signal in the alpha, beta, and theta frequency bands was extracted using the fast Fourier transform, and the mean, sum of squares, and absolute value features of each frequency band were calculated. Finally, these three features were converted into multispectral images and used as the inputs of the SSMICNN to perform the classification task.
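
As an illustration of the feature-extraction step (a minimal sketch with one hypothetical channel and a naive DFT; the study used 18 channels and FFT-based spectra), the three per-band statistics can be computed like this:

```python
import cmath, math

def dft_magnitudes(signal):
    """Naive DFT; returns the magnitude spectrum up to Nyquist
    (fine for short epochs; an FFT is used in practice)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def band_features(signal, fs, band):
    """Mean, sum of squares, and summed absolute value of the
    spectral magnitudes falling inside one frequency band."""
    mags = dft_magnitudes(signal)
    hz_per_bin = fs / len(signal)
    lo, hi = band
    vals = [m for k, m in enumerate(mags) if lo <= k * hz_per_bin < hi]
    mean = sum(vals) / len(vals)
    ssq = sum(v * v for v in vals)
    abs_sum = sum(abs(v) for v in vals)
    return mean, ssq, abs_sum

# Hypothetical 1-second epoch at 128 Hz: a 10 Hz (alpha-band) sine wave
fs = 128
epoch = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_features(epoch, fs, b) for name, b in bands.items()}
```

For a pure 10 Hz signal, the alpha-band statistics dominate those of the theta and beta bands, which is what lets the per-band features discriminate spectral content.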

    RESULTS: The results show that SSMICNN can fuse multispectral images (MSIs) formed from the different features, enriching the feature quantity of the network's input layer while retaining excellent robustness; in the back-propagation phase, errors are transmitted simultaneously to the three convolution channels.

    COMPARISON WITH EXISTING METHODS: SSMICNN identified ERP signals of aMCI patients with T2DM against the control group more effectively than existing classification methods, including convolutional neural networks, support vector machines, and logistic regression.

    CONCLUSIONS: The combination of SSMICNN and MSI can be used as an effective biological marker to distinguish aMCI patients with T2DM from the control group.

  3. Wen D, Li R, Jiang M, Li J, Liu Y, Dong X, et al.
    Neural Netw, 2021 Dec 25;148:23-36.
    PMID: 35051867 DOI: 10.1016/j.neunet.2021.12.010
    This study aims to explore an effective method of evaluating spatial cognitive ability: one that can extract and classify features of EEG signals collected from subjects in a virtual reality (VR) environment and, based on the classification results, evaluate the training effect objectively and quantitatively to ensure the objectivity and accuracy of the spatial cognition evaluation. To this end, a multi-dimensional conditional mutual information (MCMI) method is proposed, which calculates the coupling strength between two channels while accounting for the influence of the other channels. The coupling characteristics of the multi-frequency combinations were transformed into multispectral images, and the image data were classified with a convolutional neural network (CNN) model. The experimental results showed that the multispectral-image features based on MCMI outperform other methods in classification; among the six band combinations, the Beta1-Beta2-Gamma combination achieved the best classification accuracy of 98.3%. The MCMI characteristics of the Beta1-Beta2-Gamma band combination can therefore serve as a biological marker for the evaluation of spatial cognition, and the proposed MCMI-based feature extraction method provides a new perspective for spatial cognitive ability assessment and analysis.

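
For intuition, conditional mutual information between two discretized channels given a third can be estimated from empirical frequencies. The sketch below (hypothetical binned samples, not the study's MCMI implementation) shows that coupling between identical channels survives conditioning on an unrelated channel, while coupling fully explained by the conditioning channel vanishes:

```python
import math
from collections import Counter

def cond_mutual_info(xs, ys, zs):
    """I(X;Y|Z) for discretized signals: the coupling between
    channels X and Y remaining after conditioning on channel Z."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    cmi = 0.0
    for (x, y, z), c in pxyz.items():
        p_xyz = c / n
        cmi += p_xyz * math.log2((pz[z] / n) * p_xyz /
                                 ((pxz[(x, z)] / n) * (pyz[(y, z)] / n)))
    return cmi

# Hypothetical discretized (binned) samples from three channels
x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
y = x[:]                              # y copies x -> strong coupling
z = [0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
strong = cond_mutual_info(x, y, z)    # coupling not explained by z
weak = cond_mutual_info(x, z, y)      # conditioning on y (= x) removes it
```

Computing such conditional couplings for every channel pair, conditioned on the remaining channels, yields the coupling-strength matrix that MCMI-style methods turn into images for classification.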
  4. Zhang X, Dong X, Saripan MIB, Du D, Wu Y, Wang Z, et al.
    Thorac Cancer, 2023 Jul;14(19):1802-1811.
    PMID: 37183577 DOI: 10.1111/1759-7714.14924
    BACKGROUND: Radiomic diagnosis models generally consider only a single dimension of information, which limits their diagnostic accuracy and reliability. Integrating multiple dimensions of information into a deep learning model has the potential to improve its diagnostic capability. The purpose of this study was to evaluate the performance of a deep learning model in distinguishing tuberculosis (TB) nodules from lung cancer (LC) based on deep learning features, radiomic features, and clinical information.

    METHODS: Positron emission tomography (PET) and computed tomography (CT) image data from 97 patients with LC and 77 patients with TB nodules were collected. One hundred radiomic features were extracted from both PET and CT imaging using the pyradiomics platform, and 2048 deep learning features were obtained through a residual neural network. Four models were compared: a traditional machine learning model with radiomic features as input (traditional radiomics); a deep learning model with image features alone as input (deep convolutional neural network [DCNN]); a deep learning model with two inputs, radiomic features and deep learning features (radiomics-DCNN); and a deep learning model with radiomic features, deep learning features, and clinical information as inputs (integrated model). The models were evaluated using area under the curve (AUC), sensitivity, accuracy, specificity, and F1-score metrics.
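
As a rough sketch of the integrated model's input (feature-group sizes taken from the text; the clinical covariates are hypothetical), the three feature groups can be concatenated into a single vector before the final classification layers:

```python
def build_integrated_input(radiomic, deep, clinical):
    """Concatenate radiomic, deep-learning, and clinical features
    into one vector for the integrated model's input layer."""
    return list(radiomic) + list(deep) + list(clinical)

# Hypothetical per-patient features: 100 radiomic values (PET + CT),
# 2048 residual-network features, and a few clinical covariates
radiomic = [0.0] * 100
deep = [0.0] * 2048
clinical = [63, 1, 0]            # e.g. age, sex, smoking status
x = build_integrated_input(radiomic, deep, clinical)
```

The combined vector has 100 + 2048 + 3 = 2151 entries here; in practice each group is usually normalized before fusion so no single group dominates.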

    RESULTS: The results of the classification of TB nodules and LC showed that the integrated model achieved an AUC of 0.84 (0.82-0.88), sensitivity of 0.85 (0.80-0.88), and specificity of 0.84 (0.83-0.87), performing better than the other models.

    CONCLUSION: The integrated model was found to be the best classification model in the diagnosis of TB nodules and solid LC.
