Current breast cancer diagnosis methods often face limitations such as high cost, long turnaround times, and inter-observer variability. To address these challenges, this research proposes a novel deep learning framework that leverages generative adversarial networks (GANs) for data augmentation and transfer learning to enhance breast cancer classification with convolutional neural networks (CNNs). The framework uses a two-stage augmentation approach. First, a conditional Wasserstein GAN (cWGAN) generates synthetic breast cancer images conditioned on clinical data, improving training stability and enabling targeted feature incorporation. Second, traditional augmentation techniques (e.g., rotation, flipping, cropping) are applied to both original and synthetic images. A multi-scale transfer learning technique is also employed, integrating three pre-trained CNNs (DenseNet-201, NASNetMobile, ResNet-101) with a multi-scale feature enrichment scheme that allows the model to capture features at various scales. The framework was evaluated on the BreakHis dataset, achieving an accuracy of 99.2% for binary classification and 98.5% for multi-class classification, outperforming the existing methods compared against. This framework offers a more efficient, cost-effective, and accurate approach to breast cancer diagnosis. Future work will focus on generalizing the framework to clinical datasets and integrating it into diagnostic workflows.
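The two-stage augmentation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `synthesize` function is a hypothetical stand-in for the trained cWGAN generator (which would map noise plus a clinical-condition label to an image), and the classical transforms are reduced to numpy operations on small placeholder images.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize(label, n, size=8):
    """Stand-in for the cWGAN generator.

    In the paper, a conditional Wasserstein GAN maps (noise, clinical
    condition) -> synthetic histopathology image. Here we emit labeled
    random arrays so the two-stage pipeline is runnable end to end.
    """
    images = rng.random((n, size, size))
    labels = np.full(n, label)
    return images, labels

def classical_augment(img):
    """Stage two: rotation, flipping, and cropping for one image.

    The crop is re-padded to the original size so all variants share
    one shape; a real pipeline would resize instead.
    """
    variants = [img]
    variants.append(np.rot90(img))        # 90-degree rotation
    variants.append(np.fliplr(img))       # horizontal flip
    crop = img[1:-1, 1:-1]                # central crop
    variants.append(np.pad(crop, 1))      # pad back to original shape
    return variants

# Stage one: pool original images (placeholders here) with cWGAN
# synthetics, then push both through the same classical transforms.
originals = rng.random((4, 8, 8))
synthetics, labels = synthesize(label=1, n=4)
pool = np.concatenate([originals, synthetics])
augmented = [v for img in pool for v in classical_augment(img)]
print(len(augmented))  # 8 images x 4 variants = 32
```

Applying the classical transforms to the synthetic images as well as the originals is what makes the two stages multiplicative: each cWGAN sample contributes several training variants rather than one.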
* Title and MeSH Headings from MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine.