A multimodal data-based model for breast cancer diagnosis
| Field | Value | Language |
| --- | --- | --- |
| dc.citation.volume | 279 | |
| dc.contributor.author | Wang H | |
| dc.contributor.author | Wei L | |
| dc.contributor.author | Li J | |
| dc.contributor.author | Liu B | |
| dc.contributor.author | Fang J | |
| dc.contributor.author | Mooney C | |
| dc.date.accessioned | 2026-03-11T22:19:59Z | |
| dc.date.issued | 2026-05-15 | |
| dc.description.abstract | Background and Objective: Developing multimodal data-driven diagnostic systems has become a key clinical strategy for improving breast cancer outcomes. However, effectively modeling multimodal features remains challenging due to substantial semantic heterogeneity, scale discrepancies, and the inherent difficulty of cross-modal alignment. Although existing studies have proposed various multimodal fusion methods, most rely on direct feature concatenation or shallow integration, approaches that fail to capture both fine-grained intra-modality semantics and the complex interactions between the histopathological and genomic modalities. Methods: In this study, we propose a multimodal diagnostic framework based on Feature Enhancement and Semantic Collaborative Alignment (FESCA). The method incorporates a semantic-guided modality feature enhancement mechanism that effectively extracts and strengthens diagnostic cues from both pathological images and genomic data. In addition, a contrastive-learning-based cross-modal alignment strategy is introduced to map heterogeneous modalities into a unified semantic space and achieve deep semantic collaboration through contrastive optimization. To ensure robust breast cancer classification under varying modality availability, a multimodal collaborative diagnostic strategy is employed to dynamically adapt the feature representations. Results: We evaluate FESCA on the TCGA-BRCA dataset, and the experimental results demonstrate that it outperforms state-of-the-art methods in breast cancer classification while significantly improving both intra-modality representation quality and cross-modal semantic alignment. Conclusion: To enhance accessibility and practical application, we developed a web-based breast cancer pathological staging diagnosis system to visualize and deploy the FESCA model, taking a step toward clinical application and providing a benchmark for other research methods. | |
| dc.description.confidential | false | |
| dc.identifier.citation | Wang H, Wei L, Li J, Liu B, Fang J, Mooney C. (2026). A multimodal data-based model for breast cancer diagnosis. Computer Methods and Programs in Biomedicine. 279. | |
| dc.identifier.doi | 10.1016/j.cmpb.2026.109288 | |
| dc.identifier.eissn | 1872-7565 | |
| dc.identifier.elements-type | journal-article | |
| dc.identifier.issn | 0169-2607 | |
| dc.identifier.number | 109288 | |
| dc.identifier.pii | S0169260726000568 | |
| dc.identifier.uri | https://mro.massey.ac.nz/handle/10179/74295 | |
| dc.language | English | |
| dc.publisher | Elsevier B.V. | |
| dc.publisher.uri | https://www.sciencedirect.com/science/article/pii/S0169260726000568 | |
| dc.relation.isPartOf | Computer Methods and Programs in Biomedicine | |
| dc.rights | © The author(s) | en |
| dc.rights.license | CC BY 4.0 | en |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en |
| dc.subject | Diagnostic systems | |
| dc.subject | Contrastive learning | |
| dc.subject | Cross-modal learning | |
| dc.subject | Multimodal classification | |
| dc.title | A multimodal data-based model for breast cancer diagnosis | |
| dc.type | Journal article | |
| pubs.elements-id | 610083 | |
| pubs.organisational-group | Other |
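
The abstract above turns on a contrastive-learning-based cross-modal alignment step that maps pathology-image and genomic features into a unified semantic space. The record contains no implementation details, so the following is a minimal sketch only, assuming a standard CLIP-style symmetric InfoNCE objective; `ModalityProjector`, the feature dimensions, and the temperature value are illustrative assumptions, not taken from the FESCA paper.

```python
# Minimal sketch (not the authors' code) of contrastive cross-modal
# alignment: two modality-specific projectors map pathology-image and
# genomic features into a shared space, and a symmetric InfoNCE loss
# pulls paired samples together. All names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityProjector(nn.Module):
    """Maps one modality's features into the shared d-dimensional space."""
    def __init__(self, in_dim: int, shared_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, shared_dim),
            nn.ReLU(),
            nn.Linear(shared_dim, shared_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalise so cosine similarity reduces to a dot product.
        return F.normalize(self.net(x), dim=-1)

def symmetric_infonce(img_z: torch.Tensor, gen_z: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """CLIP-style loss: each image embedding should match its own paired
    genomic embedding (the diagonal of the similarity matrix), and vice versa."""
    logits = img_z @ gen_z.t() / temperature         # (B, B) similarities
    targets = torch.arange(img_z.size(0), device=img_z.device)
    loss_i2g = F.cross_entropy(logits, targets)      # image -> genomics
    loss_g2i = F.cross_entropy(logits.t(), targets)  # genomics -> image
    return 0.5 * (loss_i2g + loss_g2i)

# Usage with hypothetical feature dimensions (e.g. 1024-d WSI patch
# features and 5000-d gene-expression vectors):
img_proj = ModalityProjector(in_dim=1024)
gen_proj = ModalityProjector(in_dim=5000)
img_feats = torch.randn(8, 1024)   # batch of pathology-image features
gen_feats = torch.randn(8, 5000)   # paired genomic features
loss = symmetric_infonce(img_proj(img_feats), gen_proj(gen_feats))
loss.backward()
```

A symmetric objective of this kind is a common choice when neither modality is privileged: each pathology embedding is trained to rank its paired genomic embedding above the other samples in the batch, and vice versa, which is one standard way to realise the "unified semantic space" the abstract describes.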
