A multimodal data-based model for breast cancer diagnosis

dc.citation.volume: 279
dc.contributor.author: Wang H
dc.contributor.author: Wei L
dc.contributor.author: Li J
dc.contributor.author: Liu B
dc.contributor.author: Fang J
dc.contributor.author: Mooney C
dc.date.accessioned: 2026-03-11T22:19:59Z
dc.date.issued: 2026-05-15
dc.description.abstract: Background and Objective: Developing multimodal data-driven diagnostic systems has become a key clinical strategy for improving breast cancer outcomes. However, effectively modeling multimodal features remains challenging due to substantial semantic heterogeneity, scale discrepancies, and the inherent difficulty of cross-modal alignment. Although existing studies have proposed various multimodal fusion methods, most rely on direct feature concatenation or shallow integration and therefore fail to capture fine-grained intra-modality semantics or the complex interactions between histopathological and genomic modalities.
Methods: We propose a multimodal diagnostic framework based on Feature Enhancement and Semantic Collaborative Alignment (FESCA). The method incorporates a semantic-guided modality feature enhancement mechanism that extracts and strengthens diagnostic cues from both pathological images and genomic data. In addition, a contrastive-learning-based cross-modal alignment strategy maps the heterogeneous modalities into a unified semantic space and achieves deep semantic collaboration through contrastive optimization. To ensure robust breast cancer classification under varying modality availability, a multimodal collaborative diagnostic strategy dynamically adapts the feature representations.
Results: We evaluate FESCA on the TCGA-BRCA dataset; it outperforms state-of-the-art methods in breast cancer classification while significantly improving both intra-modality representation quality and cross-modal semantic alignment.
Conclusion: To support accessibility and practical use, we developed a web-based breast cancer pathological staging diagnosis system that visualizes and deploys the FESCA model, marking a step toward clinical application and providing a benchmark for related methods.
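The contrastive-learning-based cross-modal alignment described in the abstract can be sketched in general terms. The paper's code is not published here, so everything below is an illustrative assumption: a symmetric InfoNCE-style loss over per-patient pathology-image and genomic embeddings, each projected into a shared space, where matched pairs are pulled together and mismatched pairs within a batch are pushed apart. All dimensions, function names, and the choice of InfoNCE are hypothetical, not taken from FESCA itself.

```python
# Illustrative sketch only; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def project(x, w):
    """Linear projection into the shared semantic space, L2-normalised."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def infonce_alignment_loss(img_emb, gen_emb, temperature=0.07):
    """Symmetric contrastive loss: the i-th image and i-th genomic profile
    belong to the same patient (positive pair); all other in-batch pairs
    are treated as negatives."""
    logits = (img_emb @ gen_emb.T) / temperature  # (B, B) cosine similarities
    idx = np.arange(len(logits))                  # diagonal entries are matches
    def xent(l):
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[idx, idx].mean()
    return 0.5 * (xent(logits) + xent(logits.T))  # image->gene and gene->image

# Toy batch: 8 patients, 512-d image features, 200-d genomic features,
# both projected to a 64-d shared space (all sizes are assumptions).
img = rng.normal(size=(8, 512))
gen = rng.normal(size=(8, 200))
w_img = rng.normal(size=(512, 64)) * 0.02
w_gen = rng.normal(size=(200, 64)) * 0.02
loss = infonce_alignment_loss(project(img, w_img), project(gen, w_gen))
```

Minimising such a loss over the projection weights is what drives the two modalities toward a unified semantic space; the paper's actual enhancement and collaboration modules would sit around this alignment step.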
dc.description.confidential: false
dc.identifier.citation: Wang H, Wei L, Li J, Liu B, Fang J, Mooney C. (2026). A multimodal data-based model for breast cancer diagnosis. Computer Methods and Programs in Biomedicine. 279.
dc.identifier.doi: 10.1016/j.cmpb.2026.109288
dc.identifier.eissn: 1872-7565
dc.identifier.elements-type: journal-article
dc.identifier.issn: 0169-2607
dc.identifier.number: 109288
dc.identifier.pii: S0169260726000568
dc.identifier.uri: https://mro.massey.ac.nz/handle/10179/74295
dc.language: English
dc.publisher: Elsevier B.V.
dc.publisher.uri: https://www.sciencedirect.com/science/article/pii/S0169260726000568
dc.relation.isPartOf: Computer Methods and Programs in Biomedicine
dc.rights: (c) The author/s
dc.rights.license: CC BY 4.0
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Diagnostic systems
dc.subject: Contrastive learning
dc.subject: Cross-modal learning
dc.subject: Multimodal classification
dc.title: A multimodal data-based model for breast cancer diagnosis
dc.type: Journal article
pubs.elements-id: 610083
pubs.organisational-group: Other

Files

Original bundle

Name: 610083 PDF.pdf
Size: 4.24 MB
Format: Adobe Portable Document Format
Description: Published version.pdf

License bundle

Name: license.txt
Size: 9.22 KB
Format: Plain Text
