MATERIALS AND METHODS: In this study, we employed an open-access public EEG dataset containing three distinct classes: AD, FD, and control subjects. We then constructed the newly proposed EEGConvNeXt model, a two-dimensional CNN pipeline that first converts the EEG signals into power spectrogram-based images. These images were then used as input to the proposed EEGConvNeXt model for automated classification into the AD, FD, and control classes. The proposed EEGConvNeXt model is a lightweight architecture that brings transformer-inspired design to an image-classification CNN and comprises four primary stages: a stem, a main model, downsampling, and an output stem.
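The first pipeline step, converting an EEG trace into a power spectrogram image, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sampling rate, window length, and overlap below are assumed values, and the log-scaling and min-max normalization are common conventions rather than settings confirmed by the study.

```python
import numpy as np
from scipy import signal

def eeg_to_spectrogram(eeg, fs=128, nperseg=256, noverlap=128):
    """Convert a single-channel EEG trace into a power spectrogram image.

    fs, nperseg, and noverlap are illustrative defaults; the study's
    actual settings may differ.
    """
    f, t, Sxx = signal.spectrogram(eeg, fs=fs, nperseg=nperseg, noverlap=noverlap)
    # Log-scale the power so its dynamic range suits image-based CNN input.
    img = 10.0 * np.log10(Sxx + 1e-12)
    # Min-max normalize to [0, 1] so the result can be treated as an image.
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return img

# Example: 10 s of synthetic single-channel EEG at 128 Hz.
rng = np.random.default_rng(0)
spec = eeg_to_spectrogram(rng.standard_normal(1280))
print(spec.shape)  # (frequency bins, time frames)
```

Each channel's spectrogram (or a stack of them) would then be fed to the 2-D CNN as an image.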
RESULTS: The EEGConvNeXt model achieved a classification accuracy of ∼95.70% for three-class detection (AD, FD, and control), validated using a hold-out strategy. Binary classification cases, such as AD versus FD and FD versus control, achieved accuracies exceeding 98%, demonstrating the model's robustness across scenarios.
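A hold-out validation strategy of the kind described above can be sketched with a stratified train/test split. The class counts and the 80/20 ratio below are illustrative assumptions, not the study's actual split.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical three-class labels (0 = AD, 1 = FD, 2 = control);
# the subject counts here are illustrative, not the dataset's.
y = np.array([0] * 36 + [1] * 23 + [2] * 29)
X = np.arange(len(y)).reshape(-1, 1)  # stand-in for spectrogram images

# Stratification preserves the class proportions in both partitions.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
print(len(X_tr), len(X_te))
```

The model is trained on the training partition only and accuracy is reported on the held-out partition.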
CONCLUSIONS: The proposed EEGConvNeXt model demonstrates high classification performance with a lightweight architecture suitable for deployment in resource-constrained settings. While the study establishes a novel framework for AD and FD detection, limitations include reliance on a relatively small dataset and the need for further validation on diverse populations. Future research should focus on expanding datasets, optimizing architecture, and exploring additional neurological disorders to enhance the model's utility in clinical applications.
METHODS: A total of 1447 ultrasound images, comprising 767 benign masses and 680 malignant masses, were acquired from a tertiary hospital. A semi-supervised generative adversarial network (GAN) model was developed to augment the breast ultrasound images. The synthesized images were subsequently used to classify breast masses with a convolutional neural network (CNN). The model was validated using 5-fold cross-validation.
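The 5-fold cross-validation described above can be sketched as follows. This is a minimal illustration assuming stratified folds (which preserve the benign/malignant ratio); the study may have used plain folds, and the feature array here is a stand-in for the images.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Labels matching the stated dataset: 767 benign (0) and 680 malignant (1).
y = np.array([0] * 767 + [1] * 680)
X = np.arange(len(y)).reshape(-1, 1)  # stand-in for ultrasound images

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # In the real pipeline: train the CNN on X[train_idx] (plus GAN-synthesized
    # images derived from the training fold only) and evaluate on X[test_idx].
    print(fold, len(train_idx), len(test_idx))
```

Each image appears in exactly one test fold, so the reported metrics average over five disjoint evaluations.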
RESULTS: The proposed GAN architecture generated high-quality breast ultrasound images, as verified by two experienced radiologists. Semi-supervised learning improved the quality of the synthetic data relative to the baseline method. With our synthetic data augmentation, breast mass classification was more accurate (accuracy 90.41%, sensitivity 87.94%, specificity 85.86%) than with other state-of-the-art methods.
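The three metrics reported above follow directly from confusion-matrix counts. A minimal sketch (the counts in the example call are illustrative, not the study's):

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts.

    tp/fn are malignant cases classified correctly/incorrectly;
    tn/fp are benign cases classified correctly/incorrectly.
    """
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate on malignant masses
    specificity = tn / (tn + fp)  # true-negative rate on benign masses
    return accuracy, sensitivity, specificity

# Illustrative counts only (not the paper's data):
acc, sens, spec = diagnostic_metrics(tp=90, tn=85, fp=15, fn=10)
print(acc, sens, spec)  # 0.875 0.9 0.85
```

High sensitivity matters most clinically here, since a false negative means a missed malignant mass.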
CONCLUSION: The proposed radiomics model demonstrates promising potential to synthesize and classify breast masses on ultrasound in a semi-supervised manner.