Alzheimer's disease (AD) is a neurodegenerative disease characterized by diverse pathological changes. Multimodal data from fluorodeoxyglucose positron emission tomography (FDG-PET) and magnetic resonance imaging (MRI) of the brain provide complementary views of the lesions and can improve prediction accuracy. However, the feature spaces of the two modalities differ substantially, and simple concatenation of multimodal features makes it difficult for a model to distinguish and exploit the complementary information between modalities, degrading prediction accuracy. We therefore propose an AD prediction model based on a de-correlation constraint and multimodal feature interaction. The model consists of three parts: (1) a feature extractor that uses residual connections and attention mechanisms to capture distinctive lesion features from the FDG-PET and MRI data within each modality; (2) a de-correlation constraint function that strengthens the model's ability to extract complementary information by reducing the feature similarity between modalities (a minimal sketch of such a constraint is given below); and (3) a mutual attention feature fusion module that models interactions within and between modalities, enhancing modality-specific features and adaptively re-weighting them based on information from the other modality. Experimental results on the ADNI database show that the proposed model achieves a prediction accuracy of 86.79% for the three-way classification of AD, mild cognitive impairment (MCI), and normal controls (NC), exceeding existing multimodal AD prediction models.
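The following is a minimal, illustrative sketch of a de-correlation constraint of the kind described above, assuming the two feature extractors produce paired MRI and FDG-PET feature matrices of shape (batch, dim). The penalty here (mean squared cosine similarity) and the function names are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def decorrelation_loss(f_mri: torch.Tensor, f_pet: torch.Tensor) -> torch.Tensor:
    """Penalize similarity between paired MRI and FDG-PET feature vectors.

    Both inputs are assumed to have shape (batch, dim). Squaring the per-sample
    cosine similarity keeps the penalty non-negative and pushes the two modality
    embeddings toward orthogonality, encouraging complementary representations.
    """
    sim = F.cosine_similarity(f_mri, f_pet, dim=1)
    return (sim ** 2).mean()

if __name__ == "__main__":
    # Random features stand in for the outputs of the two feature extractors.
    f_mri = torch.randn(8, 128)
    f_pet = torch.randn(8, 128)
    print(decorrelation_loss(f_mri, f_pet))
```

In training, such a term would typically be added to the classification loss with a weighting coefficient, so that reducing cross-modal feature similarity does not override the main prediction objective.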