
Browsing by Author "Yang B"

Now showing 1 - 3 of 3
    DiffusionDCI: A Novel Diffusion-Based Unified Framework for Dynamic Full-Field OCT Image Generation and Segmentation
    (IEEE Access, 2024) Yang B; Li J; Wang J; Li R; Gu K; Liu B; Militello C
Rapid and accurate identification of cancerous areas during surgery is crucial for guiding surgical procedures and reducing postoperative recurrence rates. Dynamic Cell Imaging (DCI) has emerged as a promising alternative to traditional frozen section pathology, offering high-resolution displays of tissue structures and cellular characteristics. However, challenges persist in segmenting DCI images with deep learning methods, such as color variation and artifacts between patches in whole-slide DCI images, and the difficulty of obtaining precisely annotated data. In this paper, we introduce a novel two-stage framework for DCI image generation and segmentation. Initially, the Dual Semantic Diffusion Model (DSDM) is specifically designed to generate high-quality and semantically relevant DCI images. These images not only serve as an effective means of data augmentation for downstream segmentation tasks but also help reduce reliance on large annotated medical image datasets, which are expensive and hard to obtain. Furthermore, we reuse the pretrained DSDM to extract diffusion features, which are then infused into the segmentation network via a cross-attention alignment module. This approach enables our network to capture and utilize the characteristics of DCI images more effectively, thereby significantly enhancing segmentation results. Our method was validated on the DCI dataset and compared with other methods for image generation and segmentation. Experimental results demonstrate that our method achieves superior performance in both tasks, proving the effectiveness of the proposed model.
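The cross-attention alignment step described in the abstract — segmentation-branch features attending to features extracted from the pretrained diffusion model — can be sketched in a minimal, framework-free form. All names, shapes, and the single-head formulation below are illustrative assumptions, not the paper's actual implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention.

    queries: segmentation-branch feature vectors, shape (n_q, d)
    keys, values: diffusion feature vectors, shape (n_k, d)
    Returns attended features, shape (n_q, d): each query is replaced by a
    softmax-weighted mix of the diffusion values.
    """
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this query to every diffusion key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        # Weighted sum of the value vectors.
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

In a real alignment module the queries, keys, and values would first pass through learned linear projections and multiple heads; this sketch only shows the attention mechanics that let segmentation features selectively pull in diffusion features.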
    Potential rapid intraoperative cancer diagnosis using dynamic full-field optical coherence tomography and deep learning: A prospective cohort study in breast cancer patients
    (Elsevier B.V. on behalf of Science China Press, 2024-06-15) Zhang S; Yang B; Yang H; Zhao J; Zhang Y; Gao Y; Monteiro O; Zhang K; Liu B; Wang S
An intraoperative diagnosis is critical for precise cancer surgery. However, traditional intraoperative assessments based on hematoxylin and eosin (H&E) histology, such as frozen section, are time-, resource-, and labor-intensive, and raise specimen-consumption concerns. Here, we report a near-real-time automated cancer diagnosis workflow for breast cancer that combines dynamic full-field optical coherence tomography (D-FFOCT), a label-free optical imaging method, and deep learning for bedside tumor diagnosis during surgery. To classify benign and malignant breast tissues, we conducted a prospective cohort trial. In the modeling group (n = 182), D-FFOCT images were captured from April 26 to June 20, 2018, encompassing 48 benign lesions, 114 invasive ductal carcinomas (IDC), 10 invasive lobular carcinomas, 4 ductal carcinomas in situ (DCIS), and 6 rare tumors. A deep learning model was built and fine-tuned on 10,357 D-FFOCT patches. Subsequently, from June 22 to August 17, 2018, independent tests (n = 42) were conducted on 10 benign lesions, 29 IDC, 1 DCIS, and 2 rare tumors. The model yielded excellent performance, with an accuracy of 97.62%, sensitivity of 96.88%, and specificity of 100%; only one IDC was misclassified. Meanwhile, the acquisition of the D-FFOCT images was non-destructive and did not require any tissue preparation or staining procedures. In the simulated intraoperative margin evaluation procedure, the time required for our novel workflow (approximately 3 min) was significantly shorter than that required for traditional procedures (approximately 30 min). These findings indicate that the combination of D-FFOCT and deep learning algorithms can streamline intraoperative cancer diagnosis independently of traditional pathology laboratory procedures.
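A workflow like the one above classifies many D-FFOCT patches per specimen and must then aggregate patch-level predictions into a single bedside call. The sketch below shows one simple aggregation rule; the threshold, the flagged-fraction rule, and the function name are illustrative assumptions, not the decision rule used in the study:

```python
def diagnose_specimen(patch_probs, patch_threshold=0.5, malignant_fraction=0.1):
    """Aggregate per-patch malignancy probabilities into a specimen-level call.

    patch_probs: predicted malignancy probability for each D-FFOCT patch.
    Rule (assumed for illustration): flag the specimen as malignant when at
    least `malignant_fraction` of patches exceed `patch_threshold`.
    """
    if not patch_probs:
        raise ValueError("no patches to aggregate")
    flagged = sum(1 for p in patch_probs if p >= patch_threshold)
    return "malignant" if flagged / len(patch_probs) >= malignant_fraction else "benign"
```

For intraoperative margin evaluation, a rule that flags on a small fraction of suspicious patches trades specificity for sensitivity, which is usually the preferred direction when a missed positive margin means reoperation.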
    Transformer-based multiple instance learning network with 2D positional encoding for histopathology image classification
    (Springer Nature Switzerland AG, 2025-05) Yang B; Ding L; Li J; Li Y; Qu G; Wang J; Wang Q; Liu B
Digital medical imaging, particularly pathology images, is essential for cancer diagnosis but faces challenges in direct model training due to the ultra-high resolution of whole-slide images. Although weakly supervised learning has reduced the need for manual annotations, many multiple instance learning (MIL) methods struggle to effectively capture crucial spatial relationships in histopathological images. Existing methods incorporating positional information often overlook nuanced spatial correlations or use positional encoding strategies that do not fully capture the unique spatial dynamics of pathology images. To address this issue, we propose a new framework named TMIL (Transformer-based Multiple Instance Learning Network with 2D positional encoding), which leverages multiple instance learning for weakly supervised classification of histopathological images. TMIL incorporates a Transformer-based 2D positional encoding module to model positional information and explore correlations between instances. Furthermore, TMIL divides histopathological images into pseudo-bags and trains patch-level feature vectors with deep metric learning to enhance classification performance. Finally, the proposed approach is evaluated on a public colorectal adenoma dataset. The experimental results show that TMIL outperforms existing MIL methods, achieving an AUC of 97.28% and an ACC of 95.19%. These findings suggest that TMIL’s integration of deep metric learning and positional encoding offers a promising approach for improving the efficiency and accuracy of pathology image analysis in cancer diagnosis.
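A 2D positional encoding for whole-slide patches typically assigns each patch a vector derived from its (row, column) grid position, so the Transformer can reason about spatial layout. The sketch below uses one common construction — half the channels encode the row with a 1D sinusoidal scheme, half encode the column; TMIL's actual module may differ, and all names here are illustrative:

```python
import math

def sinusoidal_1d(pos, dim):
    """Standard 1D sinusoidal positional encoding of length `dim` (dim even):
    interleaved sin/cos at geometrically spaced frequencies."""
    enc = []
    for i in range(0, dim, 2):
        freq = pos / (10000 ** (i / dim))
        enc += [math.sin(freq), math.cos(freq)]
    return enc

def positional_encoding_2d(row, col, dim):
    """2D encoding for a patch at grid position (row, col): the first dim/2
    channels encode the row, the last dim/2 encode the column."""
    assert dim % 4 == 0, "dim must split into two even halves"
    return sinusoidal_1d(row, dim // 2) + sinusoidal_1d(col, dim // 2)
```

Concatenating row and column encodings keeps the two axes distinguishable, so patches at transposed positions such as (1, 2) and (2, 1) receive different vectors.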

Copyright © Massey University  |  DSpace software copyright © 2002-2025 LYRASIS
