MULTI-STREAM DEEP LEARNING FOR BREAST CANCER CLASSIFICATION IN CONTRAST-ENHANCED SPECTRAL MAMMOGRAPHY


Ahmed A. H. Alkurdi, Amira Bibo Sallow

Abstract

Breast cancer remains one of the leading causes of cancer-related mortality among women worldwide, necessitating early and accurate diagnosis to improve patient outcomes. Contrast-enhanced spectral mammography (CESM) has emerged as a promising imaging modality for detecting malignant lesions with enhanced sensitivity. However, automated breast cancer classification in CESM remains challenging due to the complexity of mammographic patterns and the need for multi-view analysis. In this study, a novel four-stream deep learning framework is proposed that integrates multi-view mammographic imaging (craniocaudal and mediolateral oblique views), dual convolutional neural network (CNN) backbones (MobileNetV2 and EfficientNet-B0), and patient metadata (age and breast density) to enhance breast cancer classification. The proposed model extracts multi-scale, complementary features from both imaging perspectives, refines the resulting representations with multi-head attention and a token-based multilayer perceptron (MLP), and incorporates structured metadata for improved diagnostic accuracy.
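The four-stream design described above can be sketched as follows. This is a hypothetical re-implementation in Keras, not the authors' exact configuration: the input resolution, token dimension, attention settings, per-token MLP, and metadata fusion layer are all assumptions made for illustration. Each backbone is applied to both views (2 views x 2 backbones = 4 streams), each stream is projected to a common embedding and treated as one token, attention and an MLP refine the token set, and the metadata vector is concatenated before the classification head.

```python
"""Hedged sketch of a four-stream CESM classifier (assumed hyperparameters)."""
import tensorflow as tf
from tensorflow.keras import layers

IMG_SHAPE = (224, 224, 3)   # assumed input resolution
TOKEN_DIM = 128             # assumed shared embedding size


def build_four_stream_model() -> tf.keras.Model:
    cc = layers.Input(IMG_SHAPE, name="cc_view")     # craniocaudal view
    mlo = layers.Input(IMG_SHAPE, name="mlo_view")   # mediolateral oblique view
    meta = layers.Input((2,), name="metadata")       # [age, breast density]

    # Two CNN backbones, each applied to both views -> four feature streams.
    # weights=None keeps the sketch self-contained (no pretrained download).
    mobilenet = tf.keras.applications.MobileNetV2(
        include_top=False, weights=None, input_shape=IMG_SHAPE, pooling="avg")
    effnet = tf.keras.applications.EfficientNetB0(
        include_top=False, weights=None, input_shape=IMG_SHAPE, pooling="avg")
    streams = [mobilenet(cc), mobilenet(mlo), effnet(cc), effnet(mlo)]

    # Project each stream to a common dimension and treat it as one token,
    # giving a (batch, 4, TOKEN_DIM) token sequence.
    tokens = layers.Concatenate(axis=1)(
        [layers.Reshape((1, TOKEN_DIM))(layers.Dense(TOKEN_DIM)(s))
         for s in streams])

    # Multi-head attention over the four stream tokens, with a residual link.
    attended = layers.MultiHeadAttention(num_heads=4, key_dim=32)(tokens, tokens)
    attended = layers.LayerNormalization()(layers.Add()([attended, tokens]))

    # Token-based MLP refinement (a simple per-token MLP stand-in).
    refined = layers.Flatten()(layers.Dense(TOKEN_DIM, activation="gelu")(attended))

    # Fuse image features with structured patient metadata, then classify.
    fused = layers.Concatenate()(
        [refined, layers.Dense(16, activation="relu")(meta)])
    out = layers.Dense(1, activation="sigmoid", name="malignancy")(fused)

    return tf.keras.Model([cc, mlo, meta], out)
```

In this sketch the two backbones share weights across views, so the four streams come from two networks; whether the original model shares or duplicates backbone weights per view is not stated in the abstract.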


The model was trained and evaluated on the Categorized Digital Database for Contrast-Enhanced Spectral Mammography (CDD-CESM) dataset, consisting of approximately 600 mammograms. The results demonstrate state-of-the-art performance, achieving an accuracy of 87.21% and an AUC-ROC of 0.9676, outperforming prior methods applied to the same dataset. The integration of metadata proved beneficial, allowing the model to learn clinically relevant correlations between mammographic patterns and patient risk factors. Additionally, the multi-view, multi-backbone approach provided a more comprehensive feature representation, leading to improved lesion characterization and class separability.
