This study aims to develop an effective method for evaluating spatial cognitive ability: one that extracts and classifies features of EEG signals recorded from subjects performing tasks in a virtual reality (VR) environment, and that uses the classification results to assess training effects objectively and quantitatively, ensuring an objective and accurate evaluation of spatial cognition. To this end, a multi-dimensional conditional mutual information (MCMI) method is proposed, which computes the coupling strength between two channels while accounting for the influence of the remaining channels. The coupling features of multiple frequency-band combinations were transformed into multi-spectral images, and these images were classified with a convolutional neural network (CNN). The experimental results showed that the MCMI-based multi-spectral image features outperformed other methods in classification; among the six band combinations tested, the Beta1-Beta2-Gamma combination achieved the best classification accuracy of 98.3%. The MCMI characteristics of the Beta1-Beta2-Gamma band combination can therefore serve as a biological marker for the evaluation of spatial cognition. The proposed MCMI-based feature extraction method provides a new perspective for the assessment and analysis of spatial cognitive ability.
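The abstract describes the pipeline only at a high level and does not give the MCMI formula. The sketch below is a minimal illustration of the underlying idea of pairwise channel coupling conditioned on the remaining channels; it uses a simple joint-Gaussian estimator of conditional mutual information, not the authors' estimator, and the function names `gaussian_cmi` and `coupling_matrix` are hypothetical, not from the paper.

```python
import numpy as np

def gaussian_cmi(x, y, z):
    """I(x; y | z) in nats under a joint-Gaussian assumption (illustrative only).
    x, y: (n_samples,) arrays for the two channels of interest;
    z: (n_samples, n_cond) array of conditioning channels."""
    def logdet_cov(*cols):
        data = np.column_stack(cols)                      # (n_samples, d)
        cov = np.atleast_2d(np.cov(data, rowvar=False))   # (d, d) covariance
        return np.linalg.slogdet(cov)[1]
    # Gaussian CMI identity:
    # I(X;Y|Z) = 0.5 * (log|C_xz| + log|C_yz| - log|C_z| - log|C_xyz|)
    return 0.5 * (logdet_cov(x, z) + logdet_cov(y, z)
                  - logdet_cov(z) - logdet_cov(x, y, z))

def coupling_matrix(eeg):
    """Pairwise coupling matrix: entry (i, j) = I(ch_i; ch_j | all other channels).
    eeg: (n_channels, n_samples) band-filtered EEG segment."""
    n_ch = eeg.shape[0]
    mat = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            others = np.delete(eeg, [i, j], axis=0).T     # conditioning channels
            mat[i, j] = mat[j, i] = gaussian_cmi(eeg[i], eeg[j], others)
    return mat

# Toy usage with simulated data (8 channels, 1000 samples).
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))
print(coupling_matrix(eeg).round(4))
```

In the paper's pipeline, such a coupling matrix would presumably be computed per frequency band and the band-wise matrices combined into multi-spectral images for CNN classification; the exact image construction is not specified in the abstract.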