Journal Articles

Permanent URI for this collection: https://mro.massey.ac.nz/handle/10179/7915

Search Results

Now showing 1 - 3 of 3
  • Item
    Efficient Limb Range of Motion Analysis from a Monocular Camera for Edge Devices.
    (MDPI (Basel, Switzerland), 2025-01-22) Yan X; Zhang L; Liu B; Qu G; Amerini I; Russo P; Di Ciaccio F
    Traditional limb kinematic analysis relies on manual goniometer measurements. With advances in computer vision, integrating RGB cameras can minimize manual labor. Although deep learning-based camera methods aim to offer the same ease of use as a manual goniometer, previous approaches have prioritized accuracy over efficiency and cost, targeting PC-class devices. Healthcare providers, however, require a high-performance, low-cost, camera-based tool for assessing upper- and lower-limb range of motion (ROM). To address this, we propose a lightweight, fast deep learning model that estimates human pose and uses the predicted joints for limb ROM measurement. The proposed model is optimized for deployment on resource-constrained edge devices, balancing accuracy against the benefits of edge computing, such as cost-effectiveness and localized data processing. Our model uses a compact neural network architecture with 8-bit quantized parameters for enhanced memory efficiency and reduced latency. Evaluated on various upper- and lower-limb tasks, it runs 4.1 times faster and is 15.5 times smaller than a state-of-the-art model while achieving satisfactory ROM measurement accuracy and agreement with a goniometer. We also conduct an experiment on a Raspberry Pi, illustrating that the method can maintain accuracy while reducing equipment and energy costs. This result indicates the potential for deployment on other edge devices and provides the flexibility to adapt to various hardware environments, depending on diverse needs and resources.
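The abstract does not include code, but the core computation it describes, deriving a limb ROM angle from predicted joint positions, can be sketched simply: the angle at a middle joint (e.g. the elbow in a shoulder-elbow-wrist chain) is the angle between the two limb-segment vectors. The helper below is a hypothetical illustration, not the authors' implementation; joint names and coordinates are assumptions.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by points a-b-c
    (e.g. shoulder-elbow-wrist from a 2D pose estimator)."""
    v1 = (a[0] - b[0], a[1] - b[1])  # segment b -> a
    v2 = (c[0] - b[0], c[1] - b[1])  # segment b -> c
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift.
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# Fully extended arm (collinear joints) -> 180 degrees
print(joint_angle((0, 0), (1, 0), (2, 0)))  # 180.0
# Right-angle elbow flexion -> 90 degrees
print(joint_angle((0, 0), (1, 0), (1, 1)))  # 90.0
```

In a pipeline like the one described, the 2D keypoints would come from the quantized pose model, and the ROM for a task would be reported as the range between the minimum and maximum joint angle observed across frames.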
  • Item
    Potential rapid intraoperative cancer diagnosis using dynamic full-field optical coherence tomography and deep learning: A prospective cohort study in breast cancer patients
    (Elsevier B.V. on behalf of Science China Press, 2024-06-15) Zhang S; Yang B; Yang H; Zhao J; Zhang Y; Gao Y; Monteiro O; Zhang K; Liu B; Wang S
    An intraoperative diagnosis is critical for precise cancer surgery. However, traditional intraoperative assessments based on hematoxylin and eosin (H&E) histology, such as frozen section, are time-, resource-, and labor-intensive, and raise specimen-consumption concerns. Here, we report a near-real-time automated cancer diagnosis workflow for breast cancer that combines dynamic full-field optical coherence tomography (D-FFOCT), a label-free optical imaging method, with deep learning for bedside tumor diagnosis during surgery. To classify benign and malignant breast tissues, we conducted a prospective cohort trial. In the modeling group (n = 182), D-FFOCT images were captured from April 26 to June 20, 2018, encompassing 48 benign lesions, 114 invasive ductal carcinomas (IDC), 10 invasive lobular carcinomas, 4 ductal carcinomas in situ (DCIS), and 6 rare tumors. A deep learning model was built and fine-tuned on 10,357 D-FFOCT patches. Subsequently, from June 22 to August 17, 2018, an independent test set (n = 42) comprising 10 benign lesions, 29 IDC, 1 DCIS, and 2 rare tumors was evaluated. The model yielded excellent performance, with an accuracy of 97.62%, a sensitivity of 96.88%, and a specificity of 100%; only one IDC was misclassified. Meanwhile, acquisition of the D-FFOCT images was non-destructive and required no tissue preparation or staining. In a simulated intraoperative margin evaluation, the time required for our novel workflow (approximately 3 min) was significantly shorter than that required for the traditional procedure (approximately 30 min). These findings indicate that the combination of D-FFOCT and deep learning algorithms can streamline intraoperative cancer diagnosis independently of traditional pathology laboratory procedures.
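The reported accuracy, sensitivity, and specificity follow directly from the test-set counts in the abstract: 32 malignant cases (29 IDC + 1 DCIS + 2 rare tumors), 10 benign, and a single missed IDC. A quick check, assuming the standard definitions of these metrics:

```python
# Confusion counts derived from the abstract's test set (n = 42):
# one IDC misclassified as benign, everything else correct.
tp, fn = 31, 1   # malignant correctly called / missed
tn, fp = 10, 0   # benign correctly called / false alarms

accuracy = (tp + tn) / (tp + tn + fp + fn)   # 41 / 42
sensitivity = tp / (tp + fn)                 # 31 / 32
specificity = tn / (tn + fp)                 # 10 / 10

print(f"{accuracy:.2%}")     # 97.62%
print(f"{sensitivity:.2%}")  # 96.88%
print(f"{specificity:.2%}")  # 100.00%
```

All three figures match the values reported in the abstract.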
  • Item
    A multi-label classification model for full slice brain computerised tomography image
    (BioMed Central Ltd, 2020-11-18) Li J; Fu G; Chen Y; Li P; Liu B; Pei Y; Feng H
    BACKGROUND: Screening of brain computerised tomography (CT) images is a primary method currently used for the initial detection of patients with brain trauma or other conditions. In recent years, deep learning techniques have shown remarkable advantages in clinical practice, and researchers have attempted to use them to detect brain diseases from CT images. Existing methods typically select images with visible lesions from full-slice brain CT scans, which must be labelled by doctors. This is inaccurate because, in practice, doctors diagnose brain disease from the full sequence of CT slices, and one patient may have multiple concurrent conditions. Such methods cannot take into account the dependencies between slices or the causal relationships among various brain diseases. Moreover, labelling images slice by slice is time-consuming and expensive. Detecting multiple diseases from full-slice brain CT images is, therefore, an important research subject with practical implications. RESULTS: In this paper, we propose a model called the slice dependencies learning model (SDLM). It learns image features from a series of variable-length brain CT scans, together with the dependencies between slices in a set of images, to predict abnormalities. The model requires only labels for the diseases reflected in the full-slice brain scan, not per-slice annotations. We evaluate the proposed model on the CQ500 dataset, which contains 1194 full sets of CT scans from a total of 491 subjects. Each subject's data comprises scans with one to eight different slice thicknesses and various diseases, captured in 30 to 396 slices per set. The evaluation results show a precision of 67.57%, a recall of 61.04%, an F1 score of 0.6412, and an area under the receiver operating characteristic curve (AUC) of 0.8934.
CONCLUSION: The proposed model is a new architecture that uses a full-slice brain CT scan for multi-label classification, unlike traditional methods that classify brain images only at the slice level. It has great potential for application to multi-label detection problems, especially for brain CT images.
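The precision, recall, and F1 reported above are for multi-label output: each scan set can carry several condition labels at once. The abstract does not say how the metrics were averaged; the sketch below illustrates one common choice, micro-averaging over all (scan, label) pairs, using toy values that are not from CQ500.

```python
# Toy multi-label data for illustration only: each row is one CT scan set,
# each column one possible condition (1 = present). Not the CQ500 labels.
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 1, 1]]

tp = fp = fn = 0
for truth_row, pred_row in zip(y_true, y_pred):
    for t, p in zip(truth_row, pred_row):
        if t and p:
            tp += 1      # condition present and predicted
        elif p and not t:
            fp += 1      # predicted but absent
        elif t and not p:
            fn += 1      # present but missed

precision = tp / (tp + fp)                           # 4 / 5 = 0.80
recall = tp / (tp + fn)                              # 4 / 6 ~= 0.667
f1 = 2 * precision * recall / (precision + recall)   # 8 / 11 ~= 0.727
```

Macro-averaging (computing metrics per label, then averaging) is the other common convention and can give noticeably different numbers when labels are imbalanced, as brain-disease labels typically are.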