Massey Documents by Type
Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294
Search Results (3 items)
Item: Adopting augmented reality to avoid underground utilities strikes during excavation : a thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, School of Built Environment, College of Science, Massey University, New Zealand (Massey University, 2025) Khorrami Shad, Hesam

The construction industry constantly pursues innovative methods to improve safety, enhance productivity, and reduce costs and project durations. Augmented Reality (AR) is a promising technology for visualizing data on construction sites and preventing clashes and accidents, with the potential to bring about transformative changes in construction. One of its promising applications is in the excavation sector, where accidental strikes on underground utilities pose serious safety risks and cause delays and costly damage. However, while AR has gained increasing attention in recent years, its integration into construction practice remains limited. To address this limitation, this research investigates the potential of AR to facilitate the identification of underground utility locations through a systematic review, industry engagement, and user-centred experimentation.

Initially, a systematic literature review was conducted to explore the current applications of AR in construction safety. This review identified the safety purposes of AR across three project phases: pre-event (e.g., training, safety inspections, hazard alerting, enhanced visualization), during-event (e.g., pinpointing hazards), and post-event (e.g., safety estimation). However, the review also revealed a notable lack of studies focused on AR applications in excavation activities, particularly for underground utility strike prevention.

In response, a study was undertaken to understand the needs, expectations, and challenges associated with adopting AR in the excavation sector. Thirty-one professionals from the excavation industry participated in a within-subject experiment, interacting with two AR prototypes delivered via Optical See-Through (OST) and Video See-Through (VST) devices. The findings indicated a clear preference for AR over traditional methods such as paper-based drawings. Participants preferred VST over OST, given their familiarity with VST devices such as tablets. Further, accessibility emerged as the primary barrier to adopting AR within the excavation industry.

Building on the literature and industry insights, an experimental study was designed to evaluate the effectiveness of different AR visualization methods in underground utility detection. A within-subject experiment involving 60 participants was conducted to compare four of the most cited visualization techniques for underground utilities: X-Ray, Shadow, Cross-Sectional, and a newly developed Combination method. Drawing on the Theory of Affordances and task load analysis, the study found that the Combination and X-Ray visualization methods performed better than the Shadow method. These results provide empirical support for the user-centred design of AR visualization techniques in excavation practice.

This research contributes to the fields of human-computer interaction, construction safety, and digital technology adoption by advancing the use of AR for underground utility strike prevention. The study shifts the focus of AR from general safety training to real-time, spatial visualization for excavation, offering both theoretical insights and practical applications.
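As an illustration of how the 60-participant within-subject comparison described above might be analysed, the following minimal sketch compares hypothetical task-load scores across the four visualization methods. The NASA-TLX-style scores, the Friedman omnibus test, and the Wilcoxon follow-up are illustrative assumptions, not the thesis's actual data or analysis pipeline.

# Minimal sketch of a within-subject comparison of task-load scores across the four
# AR visualization methods. Scores and the choice of tests are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
methods = ["X-Ray", "Shadow", "Cross-Sectional", "Combination"]

# Hypothetical NASA-TLX-style workload scores (0-100) for 60 participants,
# each of whom used every visualization method (within-subject design).
scores = {
    "X-Ray": rng.normal(45, 10, 60),
    "Shadow": rng.normal(60, 10, 60),
    "Cross-Sectional": rng.normal(52, 10, 60),
    "Combination": rng.normal(42, 10, 60),
}

# Omnibus test for differences among the four repeated conditions.
stat, p = stats.friedmanchisquare(*(scores[m] for m in methods))
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Pairwise follow-up: does the Combination method impose a lower workload than Shadow?
w_stat, w_p = stats.wilcoxon(scores["Combination"], scores["Shadow"])
print(f"Combination vs Shadow: Wilcoxon W = {w_stat:.1f}, p = {w_p:.4f}")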
Methodologically, the research follows a structured mixed-methods approach, combining a literature review, industry engagement, and experimental testing. Practically, it identifies user preferences, visualization methods, and key adoption factors such as usability and accessibility. Overall, this thesis bridges the gap between emerging AR technologies and their integration into safer excavation practices.

Item: Grape yield analysis with 3D cameras and ultrasonic phased arrays : a thesis by publications presented in fulfillment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Albany, New Zealand (Massey University, 2024-01-18) Parr, Baden

Accurate and timely estimation of vineyard yield is crucial for the profitability of vineyards. It enables better management of vineyard logistics, precise application of inputs, and optimization of grape quality at harvest for higher returns. However, the traditional manual process of yield estimation is prone to errors and subjectivity. Additionally, the financial burden of this manual process often leads to inadequate sampling, potentially resulting in sub-optimal insights for vineyard management. As such, there is growing interest in automating yield estimation using computer vision techniques and novel applications of technologies such as ultrasound.

Computer vision has seen significant use in viticulture. Current state-of-the-art 2D approaches, powered by advanced object detection models, can accurately identify grape bunches and individual grapes. However, these methods are limited by the physical constraints of the vineyard environment. Challenges such as occlusions caused by foliage, estimating the hidden parts of grape bunches, and determining berry sizes and distributions still lack clear solutions. Capturing 3D information about the spatial size and position of grape berries has been presented as the next step towards addressing these issues. With 3D information, the size of individual grapes can be estimated, the surface curvature of berries can be used as an identifying feature, and the position of grape bunches relative to occlusions can be used to compute alternative perspectives or estimate occlusion ratios. Researchers have demonstrated some of this value with 3D information captured through traditional means, such as photogrammetry and lab-based laser scanners. However, these approaches face challenges in real-world environments due to processing time and cost. Efficiently capturing 3D information is a rapidly evolving field, with recent advancements in real-time 3D camera technologies being a significant driver.

This thesis presents a comprehensive analysis of the performance of available 3D camera technologies for grape yield estimation. Of the technologies tested, we determined that individual berries and the concave details between neighbouring grapes were better represented by time-of-flight (ToF) based technologies. Furthermore, these worked well regardless of ambient lighting conditions, including direct sunlight. However, distortions of individual grapes were observed in both ToF and LiDAR 3D scans. This is due to subsurface scattering, in which the emitted light enters the grapes before returning, changing the propagation time and, by extension, the measured distance. We exploit these distortions as unique features and present a novel solution, working in synergy with state-of-the-art 2D object detection, to find and reconstruct in 3D grape bunches scanned in the field by a modern smartphone.
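To make the evaluation reported in the next paragraph concrete, the sketch below shows one way automated berry counts could be compared against manual field counts using the coefficient of determination (R²). The per-bunch counts and the added RMSE measure are illustrative assumptions, not data or the exact protocol from the thesis.

# Minimal sketch: comparing automated berry counts from a 3D reconstruction against
# manual field counts. The counts below are hypothetical placeholders.
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical per-bunch berry counts: manual ground truth vs. algorithm output.
manual_counts = np.array([52, 78, 61, 95, 44, 70, 83, 59])
predicted_counts = np.array([50, 80, 58, 97, 45, 66, 85, 57])

# Coefficient of determination between predicted and manual counts.
r2 = r2_score(manual_counts, predicted_counts)

# Root mean squared error as an additional, easily interpreted error measure.
rmse = float(np.sqrt(np.mean((predicted_counts - manual_counts) ** 2)))

print(f"R^2 = {r2:.3f}, RMSE = {rmse:.1f} berries per bunch")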
An R² value of 0.946 and an average precision of 0.970 were achieved when comparing our results to manual counts. Furthermore, our novel size estimation algorithm was able to estimate berry sizes accurately when manually compared to matching colour images. This work represents a novel and objective yield estimation tool that can be used on modern smartphones equipped with 3D cameras.

Occlusion of grape bunches by foliage remains a challenge for automating grape yield estimation using computer vision, and it is not always practical or possible to move or trim foliage prior to image capture. To this end, research has started investigating alternative techniques to see through foliage-based occlusions. This thesis introduces a novel ultrasonic-based approach that is able to volumetrically visualise grape bunches directly occluded by foliage. This is achieved through the use of a highly directional ultrasonic phased array and novel signal processing techniques to produce 3D convex hulls of foliage and grape bunches. We use a novel approach of agitating the foliage to enable spatial variance filtering, which removes leaves and highlights specific volumes that may belong to grape bunches. This technique has wide-reaching potential, in viticulture and beyond.

Item: Estimating grapevine water status using hyperspectral and multispectral remote sensing and machine learning algorithms : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Agriculture and Horticulture at Massey University, Manawatū, New Zealand (Massey University, 2023) Wei, Hsiang-En

Moderate water deficit is desirable for achieving the optimal grape composition that determines the value of wine, especially for red cultivars. To attain consistent grape quality in vineyards, it is critical to manage grapevine water status (GWS) within a target range, while avoiding severe dehydration, between fruit set and veraison. Given foreseeable climate change and stricter environmental regulations, viticulture needs to estimate GWS variability across fields throughout the growing season before irrigating, in order to reduce uncertainty in controlling hydration status and to produce consistent grapes of premium quality.

Precision viticulture (PV) recognizes that not all areas within a vineyard are uniform in their soil, climate, and other environmental conditions. It therefore tailors viticultural management to the unique needs of different vineyard zones by applying site-specific or time-specific management practices. PV aims to enhance grape quality and yield while minimizing resource usage and environmental impacts. As the final product (wine) can carry high added value, it is worth applying PV to decision-making based on information about spatio-temporal variability across fields. The advancement and availability of remotely sensed spectral information, geospatial technologies, and machine learning models have opened a new chapter for spatio-temporal GWS monitoring. However, there are technical shortcomings that need to be addressed before these techniques can be applied and adopted extensively in viticulture.
These include a lack of understanding of GWS-related spectral data analysis methods, a lack of data interoperability for GWS estimation between data sourced from various devices with different formats, and a lack of availability of high-coverage images with high spatial and temporal resolution. Therefore, this study tackled the technical bottlenecks related to the application of proximal sensing and remote sensing (RS) in GWS estimation from three perspectives: (i) the exploration of relevant spectral regions across the electromagnetic spectrum, (ii) the complementary use of datasets from sources other than RS, and (iii) the provision of large-scale GWS prediction.

This study was undertaken in two Pinot Noir vineyards trained with vertical shoot positioning over two growing seasons in Martinborough, New Zealand. The investigation window, corresponding to the critical period of GWS management between fruit set and veraison in each growing season, covered November, December, January, and February. Stem water potential (Ψstem), serving as a proxy for GWS, was measured on 85 and 63 canopies in the first and second growing seasons, respectively. The location of each sampled grapevine was recorded with a global navigation satellite system with real-time kinematic correction. Field data were collected five times in each growing season, including hyperspectral point measurements with an ASD FieldSpec 4 spectroradiometer (proximal sensing data) and multispectral images captured with a DJI Phantom 4 UAV (remote sensing data). An electromagnetic induction survey was carried out with an EM38-MK2 to acquire apparent electrical conductivity (ECa) maps (complementary data). Satellite images collected by PlanetScope (remote sensing data) during the study periods and a LiDAR-based digital elevation model (complementary data) were downloaded and added to the analysis datasets. An on-site weather station continuously recorded meteorological information, including air temperature (°C), relative humidity (%), rainfall (mm), wind speed (km/h), and irradiance (W/m²) (complementary data).

Identifying the relationships between spectral information and Ψstem is an essential step towards robust application. Analysis of the hyperspectral data shows that the statistically relevant wavelengths are dispersed across the visible, near-infrared, and shortwave infrared (SWIR) spectral regions. They are specifically located around the blue, red, and red-edge bands, two weak water absorption bands at 970 and 1200 nm, two strong absorption bands at 1400 and 1940 nm, and some dry matter-related bands. Analysis of the UAV multispectral images shows that the Transformed Chlorophyll Absorption Reflectance Index and the Excess Green Index are the multispectral indices most strongly correlated (R² = 0.35 and 0.30, respectively) with changes in Ψstem, as illustrated in the sketch below. This implies that variation in leaf pigments, especially chlorophylls, describes the Ψstem variation of Pinot Noir better than changes in canopy structure. When applying the Ψstem-sensitive spectral bands through airborne or spaceborne platforms, the absence of SWIR bands on most commercial multispectral sensors and the presence of water vapour in the atmosphere limit the usefulness of these bands.
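The following minimal sketch illustrates how the two indices mentioned above could be computed from multispectral band reflectances and correlated with Ψstem. The band approximations (red edge, red, and green standing in for the 700, 670, and 550 nm bands), the reflectance values, and the Ψstem values are hypothetical; only the index formulas follow their common published definitions.

# Minimal sketch: compute TCARI and ExG from multispectral reflectances and correlate
# them with stem water potential (Psi_stem). All values below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical per-vine mean canopy reflectance in four bands (0-1 scale).
blue = np.array([0.04, 0.05, 0.05, 0.06, 0.04, 0.05])
green = np.array([0.09, 0.11, 0.10, 0.12, 0.08, 0.10])
red = np.array([0.06, 0.08, 0.07, 0.09, 0.05, 0.07])
red_edge = np.array([0.18, 0.22, 0.20, 0.25, 0.16, 0.21])

# Hypothetical stem water potential measurements for the same vines (kPa, negative).
psi_stem = np.array([-620, -860, -750, -980, -540, -800])

# TCARI, treating red_edge/red/green as approximations of the 700/670/550 nm bands.
tcari = 3 * ((red_edge - red) - 0.2 * (red_edge - green) * (red_edge / red))

# Excess Green Index from chromatic coordinates of the visible bands.
total = red + green + blue
exg = 2 * (green / total) - (red / total) - (blue / total)

for name, index in [("TCARI", tcari), ("ExG", exg)]:
    r, p = stats.pearsonr(index, psi_stem)
    print(f"{name}: r = {r:.2f}, R^2 = {r**2:.2f}, p = {p:.3f}")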
Given the SWIR and atmospheric limitations noted above, this study assesses the complementary effects provided by other environmental variables, including soil/terrain, vegetation, temporal, and weather variables, to improve the GWS estimation capabilities of aerial multispectral sensors. The results confirm these complementary effects: estimation accuracy improves from an RMSE of 213 kPa to 146 kPa and from an RMSE of 221 kPa to 138 kPa.

To monitor fields at a large scale using multispectral satellites, several technical issues are commonly encountered: coarse-resolution pixels that contain background information, weather dependence, and delays in image delivery. This study addresses the limitation of coarse spatial resolution using a two-stage calibration that scales information provided by ground measurements up to satellite images, along with removing interference from the inter-row components. It demonstrates that satellite images can approximate the collected Ψstem with high accuracy (RMSE = 59 kPa). To deal with weather contamination and delays in the delivery of image products, a prediction model is established based on the calibrated satellite images and various environmental variables (day of the year, rainfall, potential evapotranspiration, irrigation, fertigation, plucking, trimming, normalized difference vegetation index, ECa, elevation, and slope). The developed model is able to predict the Ψstem trend in an independent growing season with high consistency when compared with the reference measurements (r = 0.89 and 0.87 for the two vineyards, respectively).
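As a closing illustration, the sketch below shows one plausible shape for the kind of prediction model described above: a regression that maps calibrated satellite NDVI plus environmental variables to Ψstem, trained on one season and evaluated on an independent season with Pearson's r and RMSE. The use of a random forest, the feature set shown, and the synthetic data are assumptions for illustration; the thesis abstract does not specify the model type or provide these values.

# Minimal sketch of a GWS prediction model: regression from calibrated NDVI plus
# environmental variables to Psi_stem, evaluated on an independent season.
# Model choice and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from scipy import stats

rng = np.random.default_rng(42)

def make_season(n):
    """Generate synthetic per-vine observations for one growing season."""
    X = np.column_stack([
        rng.integers(305, 425, n) % 365,      # day of year (Nov-Feb window)
        rng.gamma(2.0, 3.0, n),               # rainfall (mm)
        rng.normal(4.5, 1.0, n),              # potential evapotranspiration (mm)
        rng.uniform(0.5, 0.9, n),             # calibrated NDVI
        rng.normal(20, 5, n),                 # apparent electrical conductivity (ECa)
        rng.normal(60, 8, n),                 # elevation (m)
    ])
    # Synthetic Psi_stem (kPa, negative): sparser canopies and higher demand -> more negative.
    y = -400 - 600 * (0.9 - X[:, 3]) - 30 * X[:, 2] + rng.normal(0, 60, n)
    return X, y

X_train, y_train = make_season(150)   # first growing season
X_test, y_test = make_season(80)      # independent second season

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

rmse = float(np.sqrt(mean_squared_error(y_test, pred)))
r, _ = stats.pearsonr(y_test, pred)
print(f"Independent-season evaluation: r = {r:.2f}, RMSE = {rmse:.0f} kPa")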
