Massey Documents by Type
Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294
Search Results (160 results)
Item Prioritising indicators of success in 'Build Back Better' post-disaster frameworks : a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Construction at Massey University, Albany, New Zealand (Massey University, 2025-07-30) Hubbard, Francis

This study explores the challenges and significance of indicator selection for key decision-makers in post-disaster response, recovery, and reconstruction efforts. When a community is overwhelmed in the aftermath of a disaster, various entities, including aid organisations, local authorities, and national agencies, are mobilised to provide emergency response and support in the subsequent response and recovery phases. These decision-makers rely on choosing appropriate indicators to evaluate the effectiveness of their interventions, track progress, and decide on appropriate actions and activities. Guided by the principle of "Build Back Better," which advocates a comprehensive and holistic approach to resilience, practitioners need to understand the intricate relationships and dependencies among indicators to make informed decisions regarding their selection. This aspect has been identified as a significant weakness in the implementation process for all stakeholders. Employing a novel methodology, this thesis uses the Hierarchical Decomposition Algorithm to analyse the priority of, and the relationships between, the indicators proposed by the 2016 'Build Back Better Framework', a synthesised framework reflecting a unified approach to disaster management. Empirical evidence from forty case studies examining key decision-makers' experiences of implementing disaster response efforts validates these findings. The study concludes with a rational process and workflow for indicator selection that considers the diverse nature of response and recovery in the pursuit of building back better effectively.

Item Mathematical modelling of airflow during forced draft precooling operations : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Food Technology at Massey University, Palmerston North, New Zealand (Massey University, 2024-11-01) Tapia Zapata, Nicolas Ignacio

As of 2020, the kiwifruit industry represented approximately 37% of New Zealand's horticultural export sector. Within this sector, the aim of the kiwifruit cold chain is to reduce losses due to poor temperature control and to reduce energy usage during refrigeration. Forced convection aided by fans removes field heat rapidly from palletised kiwifruit prior to storage, thus optimising the shelf life of the produce. A previous Computational Fluid Dynamics (CFD) model determined the optimal operating point for palletised kiwifruit during forced-draft cooling. However, CFD requires complex simulation, at the cost of computational efficiency and solving time. There is therefore a need for innovative tools that optimise package design by iterating over several designs and that are applicable to the local industry sector for cold chain optimisation. In this spirit, this project aimed to develop a simplified approach for predicting the airflow distribution in palletised kiwifruit during forced-draft cooling that can be coupled with an alternative heat transfer model, thus providing a fast and robust package optimisation routine able to inform the cooling performance of several package designs and pallet configurations.
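The abstract does not give the form of the simplified airflow model, but one classical shortcut to full CFD for airflow through stacked produce is a packed-bed (Ergun-type) resistance description. The sketch below is illustrative only and is not the model developed in the thesis; the particle diameter, void fraction and velocities are hypothetical placeholders.

```python
# Illustrative only: an Ergun-type packed-bed resistance model is one classical
# shortcut to CFD for airflow through stacked produce. It is not the simplified
# approach developed in the thesis; all parameter values are hypothetical.

def ergun_pressure_gradient(u, d_p=0.065, eps=0.40, mu=1.8e-5, rho=1.2):
    """Pressure gradient (Pa/m) for superficial air velocity u (m/s) through a
    bed of spheres of diameter d_p (m) with void fraction eps (Ergun, 1952)."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / (eps ** 3 * d_p ** 2) * u
    inertial = 1.75 * rho * (1.0 - eps) / (eps ** 3 * d_p) * u ** 2
    return viscous + inertial

if __name__ == "__main__":
    for u in (0.1, 0.5, 1.0):  # plausible forced-draft superficial velocities
        print(f"u = {u:.1f} m/s  ->  dP/dx = {ergun_pressure_gradient(u):.1f} Pa/m")
```

A resistance description of this kind trades geometric detail (vent positions, tray stacking) for speed, which is presumably why a simplified airflow model must be paired with a separate heat transfer model, as the abstract notes.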
Item Model-based packaging design for minimising environmental impact of horticultural packaging systems : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Food Technology at Massey University, New Zealand. EMBARGOED until 13 November 2026. (Massey University, 2024) Lozano, Raquel

Packaging systems are instrumental in delivering high-quality food products to consumers. Food industries grapple with losses throughout the supply chain, resulting in both product and monetary setbacks. When considering the resources embodied in food production, including raw materials, energy, water, and emissions, minimising losses at any stage of the food supply chain is crucial. The New Zealand kiwifruit industry faces several constraints, including short harvest seasons, considerable distance to markets and year-round consumer demand. Packaging and storage play a role in overcoming these factors by preventing undesirable quality loss. Establishing the link between packaging systems, supply chain conditions, and kiwifruit quality (specifically shrivel) provides a basis for evaluating the trade-off between over-packaging and excessive fruit loss. In this thesis, an integrated mathematical model was developed to aid decision-making for kiwifruit packaging, aiming to minimise the overall environmental impact throughout the kiwifruit supply chain from packhouse to purchase. This integrated mathematical model facilitates exploratory analysis of both current and future supply chains and packaging systems. Four models were integrated: mass balance, moisture loss prediction, shrivel loss prediction and an optimisation engine. The mass balance model captured the kiwifruit and packaging masses, and the associated environmental impacts, within kiwifruit supply chains. This model, applicable to any environmental metric, was developed to facilitate the prediction of kiwifruit losses. To validate its accuracy, the framework was applied in assessment examples, comparing its performance against existing research on kiwifruit supply chains. The absolute difference between predicted and actual emissions of CO2-eq was less than 1% of the actual mean emissions at different stages of the supply chain. The moisture loss model was used to estimate kiwifruit weight loss on both a packaging-unit and an individual-kiwifruit basis. The model demonstrated close agreement between weight loss predictions and experimental data for average packaging weight loss scenarios. Further refinement is needed to predict individual kiwifruit weight loss, specifically considering the impacts of packaging features on the water vapour distribution inside the package. The shrivel prediction model revealed that predicting kiwifruit losses due to shrivel posed challenges, primarily because of the current knowledge gap regarding how shrivel develops in kiwifruit under storage conditions. While increases in shrivel have been correlated with weight loss in the existing literature, the reference state (at orchard, packhouse, etc.) is arbitrary.
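For context on the moisture loss model mentioned above: postharvest weight loss is commonly described as transpiration driven by the water vapour pressure deficit between the fruit surface and the surrounding air. The sketch below illustrates only that general idea; the transpiration coefficient, fruit surface area and storage conditions are hypothetical and are not values from the thesis.

```python
# Illustrative only: a first-order transpiration description commonly used in
# postharvest engineering, not the specific moisture loss model of the thesis.
# The transpiration coefficient k_t, fruit area and conditions are hypothetical.
import math

def p_sat(T_c):
    """Saturation water vapour pressure (Pa) at T_c degrees C (Magnus formula)."""
    return 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))

def weight_loss_rate(T_fruit=1.0, T_air=0.5, rh_air=0.90,
                     k_t=1.5e-10, area=0.012):
    """Transpiration-driven mass loss rate (kg/s) for a single fruit:
    rate = k_t * A * (p_sat(T_fruit) - RH * p_sat(T_air))."""
    vpd = p_sat(T_fruit) - rh_air * p_sat(T_air)  # vapour pressure deficit, Pa
    return k_t * area * vpd

if __name__ == "__main__":
    seconds_90_days = 90 * 24 * 3600
    grams = weight_loss_rate() * seconds_90_days * 1000
    print(f"~{grams:.1f} g lost per fruit over 90 days of storage")
```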
Ideally, shrivel would be related to an intrinsic property that could be measured at any point in time without requiring knowledge of the prior history of the fruit. The prediction of losses based on a non-relative starting point represents a knowledge gap addressed in this work, with potential improvements identified for future model iterations. This phase of the model development relied heavily on data collection to establish a mathematical relationship between weight loss and shrivel. The moisture loss and shrivel models served as the foundation for the development of an optimisation engine, enabling the identification of the optimal use of packaging. This model sought a balance between packaging mass and kiwifruit losses, employing various environmental impact categories as performance metrics. The success of this approach was evident as optimal packaging points were identified across (i) different packaging materials, (ii) different packaging materials and formats, and (iii) different environmental impact categories. It was found that each optimum point was unique to the ambient conditions of the supply chain, the packaging format and the material. This work revealed trade-offs between the environmental impact of the packaging material and the amount of kiwifruit loss, numerically demonstrating what so far has only been presented as a theoretical concept in other research. This integrated model was then applied to a range of real-life supply chain scenarios, showcasing its versatility in addressing questions such as 'what if?', 'can we?' and 'when can we?'. The application of the model to real-life scenarios demonstrated its utility for decision-making with respect to packaging materials and formats. This model is poised to offer crucial support for future packaging materials and supply chains. The limitation of this model lies in its fruit loss predictions. To broaden the model's applicability, the hypotheses developed during shrivel model development require further investigation to refine the kiwifruit loss model. There also remains the opportunity to integrate more prediction models that account for the impact of packaging on other drivers of fruit loss, such as ethylene concentrations within the pack. While the integrated model developed in this thesis has some limitations in accurately predicting kiwifruit losses, this study highlights the significance of linking packaging performance and kiwifruit quality when evaluating environmental impacts. Although kiwifruit served as the focus in this work, the model created here paves the way for exploring the application of optimised packaging systems to other food commodities.
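The balancing act performed by the optimisation engine can be illustrated with a toy objective: total impact per tray = impact of the packaging itself + impact embodied in the fruit predicted to be lost. The loss curve, emission factors and masses below are hypothetical placeholders, not results from the thesis; the point is only that an interior optimum appears when extra packaging reduces fruit loss at a diminishing rate.

```python
# Illustrative only: a toy version of the packaging-vs-fruit-loss trade-off that
# the optimisation engine balances. The loss curve, emission factors and masses
# are hypothetical placeholders, not values or results from the thesis.
import math

def total_impact(pack_mass, fruit_mass=3.5, ef_pack=1.9, ef_fruit=0.8,
                 base_loss=0.06, k=30.0):
    """kg CO2-eq per tray: packaging impact plus impact embodied in lost fruit.
    Fruit loss fraction is assumed to fall exponentially as packaging mass rises."""
    loss_fraction = base_loss * math.exp(-k * pack_mass)
    return ef_pack * pack_mass + ef_fruit * fruit_mass * loss_fraction

if __name__ == "__main__":
    candidates = [m / 1000 for m in range(0, 301, 5)]   # 0 to 300 g per tray
    best = min(candidates, key=total_impact)
    print(f"optimum packaging mass ~ {best*1000:.0f} g/tray, "
          f"impact {total_impact(best):.3f} kg CO2-eq/tray")
```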
Item Novel circuit design for CMOS analog beamformer chip : Doctor of Philosophy in Electronics Engineering at Massey University, Auckland. EMBARGOED until 10 September 2027. (Massey University, 2023-09-21) Naina

The concept of the beamformer has been around since 1906. Beamformers were originally studied for directive communication, but because of properties such as beam steering and multiple-beam scanning they were later incorporated into radar systems during World War II. For future 5G communication, frequency bands in the GHz range are being analysed in order to achieve high speed and high bandwidth, and beamformers are proving very useful by providing high transmit power and a high signal-to-noise ratio at mm-wave frequencies. At high frequencies the antenna size reduces, making it possible to place multiple antennas in a small space for beamforming. Moreover, as the number of elements is increased, the overall gain of the system increases, which compensates for the path loss present at mm-wave frequencies. However, increasing the number of elements increases the overall cost and power consumption of the beamformer. Several beamformer architectures are present in the literature, but power consumption and system cost remain an issue for applications where cost is critical. This research work is focused on reducing the overall cost of a beamformer. Cost in integrated circuit (IC) design refers to DC power, silicon area and the external control required to use the IC. The phase shifter is a vital part of a beamformer: it provides the relative phase shift between each element in order to create a beam and then steer it in the desired direction. The cost and size of the phase shifter affect the overall cost of the beamformer, as the phase shifter accounts for around 40% of the total cost. In this work, two new active analog phase shifters are proposed. The proposed phase shifters are based on the Cartesian combining method, in which in-phase and quadrature signals are scaled and added together so that a desired phase can be achieved at the output. The first phase shifter is proposed for the frequency band of 25 GHz – 30 GHz. It incorporates a closed-loop error tuning circuit in the in-phase/quadrature (I/Q) signal generator. This approach reduces the I/Q amplitude and phase imbalance of the I/Q generator to below 0.3 dB and 1.1°, respectively, over a wide bandwidth of 5 GHz. Moreover, the active phase shifter consumes only 22.8 mW from a 1 V supply. The second phase shifter incorporates a novel variable gain amplifier (VGA) which improves the overall input-referred 1-dB compression point (IP1dB) of the phase shifter while maintaining comparably low DC power consumption. The proposed VGA is designed using a folded topology with a dual differential architecture. The resulting phase shifter has an IP1dB of +2.5 dBm at the center frequency of 8 GHz, which is very high compared to most of the phase shifters present in the literature. The designed phase shifter consumes 18.9 mW of core power (I/Q generator, VGA) from a 1.8 V supply. A limited scan beamformer is selected as the cost-effective approach to beamformer design in this work. A limited scan beamformer reduces the overall cost by reducing the number of variable phase shifters used while still achieving an optimum scanning range. Three main limited scan beamformer architectures (OSA, NOSA and a feed network with a single phase shifter) are studied in detail, and their radiation patterns are plotted for comparative analysis. Next, a novel 7-channel active beamformer transmitter is proposed with a novel feed network. The novel feed network allows only three variable phase shifters to be used for a 7-channel beamformer at a center frequency of 9 GHz. This reduces the complexity and the overall cost of the beamformer. The beamformer is designed using the TSMC 180 nm CMOS process. The proposed beamformer consumes a total of 523.71 mW, which equates to 74.8 mW per channel. MATLAB code is used to plot the radiation pattern of the beamformer for different scan angles.
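The radiation pattern computation referred to above is not reproduced in the abstract; the sketch below shows the standard array-factor calculation for a uniform 7-element linear array with half-wavelength spacing and a progressive steering phase. It is analogous to, not a copy of, the thesis's MATLAB code, and it ignores the constraint that the proposed feed network realises the element phases with only three variable phase shifters.

```python
# Illustrative only: array factor of a uniform 7-element linear array with
# half-wavelength spacing, steered by a progressive phase shift. The thesis's
# feed network realises the element phases differently (three variable phase
# shifters); this sketch only shows how a steered pattern is computed.
import numpy as np

def array_factor_db(theta_deg, n=7, d_over_lambda=0.5, steer_deg=15.0):
    theta = np.radians(theta_deg)
    steer = np.radians(steer_deg)
    k_d = 2 * np.pi * d_over_lambda
    elements = np.arange(n)
    # progressive phase so the main beam points at steer_deg from broadside
    phases = np.exp(1j * elements[:, None] * k_d * (np.sin(theta) - np.sin(steer)))
    af = np.abs(phases.sum(axis=0)) / n
    return 20 * np.log10(np.maximum(af, 1e-6))

if __name__ == "__main__":
    angles = np.linspace(-90, 90, 721)
    pattern = array_factor_db(angles)
    print(f"peak at {angles[np.argmax(pattern)]:.1f} deg, "
          f"peak level {pattern.max():.1f} dB")
```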
The proposed beamformer has a scanning range of around ±19°, which is very high compared to most of the limited scan beamformers in the literature, while maintaining a side lobe level (SLL) of −7 dB or below. The phase shifters and beamformer proposed in this work have simple analog architectures and control circuitry.

Item Real-time measurement of fill volume in a vessel using optical and acoustical means : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Manawatū, New Zealand. EMBARGOED until further notice. (Massey University, 2024) Barzegar, Mohammad Amin

This thesis investigates optical and acoustical methods for quickly determining the fill volume in cavities, vessels, or hoppers. The motivation for this study was the demand in the New Zealand aerial topdressing industry for a system that can accurately track the fill volume of vessels containing powders. A notable challenge in this industry is that topdressing aircraft lack systems for measuring the volume of discharge vessels during flight, leading to issues with flight safety and operational efficiency. This thesis specifically addresses the challenge of real-time fill volume determination in hoppers within New Zealand's aerial topdressing industry. Additionally, the outcomes of this thesis may offer insights and applicable methods for other industrial and scientific sectors that require real-time, contactless volume determination techniques. Three contactless volume measurement approaches were investigated: ultrasonic range-finding, 3D scanning, and acoustical resonance. The first approach used an array of ultrasonic rangefinders installed in a 200-litre powder-containing vessel, providing material level readings from multiple points. This technique was tested under discharge and no-flow conditions. According to the results, this method provided readings of the vessel fill volume at a measurement rate of ~1 Hz with an uncertainty of ~3% of the vessel capacity. The second approach used a stereoscopic technique to provide real-time scans of the material surface in the vessel. A model was developed for calculating the volume of material in the vessel from these internal scans. According to test results on different bulk materials under discharge and no-flow conditions for two vessels of 50 and 200 litres, the real-time fill volume of the vessel was obtained with uncertainties of less than 1% of the vessel's volume. The third approach explored Helmholtz resonance for determining the volume of powders and solids. This involved studying the impact of inserting a sample into a Helmholtz resonator on the resonance parameters. Three models were developed for volume estimation: an extended Helmholtz resonance model modifying the classical equation for resonators with long ports, a model for estimating solid volume in powders based on resonance frequency and quality factor, and a model for instantaneous volume measurement of a vessel's empty cavity using Helmholtz resonance. The latter correlated the change in cavity sound pressure to its volume, showing it could accurately determine volume in real time with less than 0.1% error relative to the vessel capacity.
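For reference, the classical Helmholtz resonator relation that the extended model modifies links the resonance frequency to the cavity volume (assuming a short port with end correction ΔL):

\[
f_H \;=\; \frac{c}{2\pi}\sqrt{\frac{A}{V\,(L+\Delta L)}}
\qquad\Longrightarrow\qquad
V \;=\; \frac{c^{2}\,A}{4\pi^{2} f_H^{2}\,(L+\Delta L)}
\]

where c is the speed of sound and A and L are the port cross-sectional area and length; the sample volume then follows as the difference between the known vessel volume and the inferred empty-cavity volume V. The thesis's extended model for long ports and its sound-pressure-based instantaneous method go beyond this classical form.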
Item Essays on financial risk modelling : a thesis presented in fulfilment of the requirements for the degree of Doctor of Philosophy in Finance, School of Economics and Finance, Massey University, Auckland, New Zealand (Massey University, 2024-08-08) Nguyen, Thao Thac Thanh

In today's highly interconnected financial landscape, the risk of shock spillover is a critical factor contributing to increased systemic risk and affecting global financial stability. Research on spillover effects has gained significant attention from both academics and practitioners. This thesis aims to contribute to this strand of the literature by conducting three studies that employ a variety of connectedness methods to investigate several underexplored issues within the field of financial risk management. These essays delve further into the evolution of spillover effects during times of extreme financial uncertainty and aim to identify the key drivers of these spillovers. Essay One investigates the high-frequency spillover of volatility shocks across major oil-dependent foreign exchange markets, considering the impact of the oil market's volatility regime. Essay Two examines the return shock spillover between European sectoral credit default swap markets and the natural gas market. This investigation is conducted across different quantiles of the return distributions, with a special focus on understanding the effects of the ongoing Russian-Ukrainian war on this spillover. Essay Three scrutinizes the spillover effects of inflation shocks between the U.S. and emerging markets under normal economic conditions and under extreme inflationary conditions. The essay further unveils the determinants of the inflation spillovers among the sample markets.
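The abstract does not specify which connectedness measures are used; one widely used benchmark is the Diebold–Yilmaz total spillover index, built on a generalized forecast error variance decomposition (FEVD) of a VAR, of which high-frequency and quantile-based variants exist:

\[
\tilde{\theta}_{ij}(H) \;=\; \frac{\theta_{ij}(H)}{\sum_{j=1}^{N}\theta_{ij}(H)},
\qquad
S(H) \;=\; 100 \times \frac{\sum_{i\neq j}\tilde{\theta}_{ij}(H)}{\sum_{i,j}\tilde{\theta}_{ij}(H)}
\]

where θ_ij(H) is the H-step-ahead share of variable i's forecast error variance attributable to shocks in variable j, and the tilde denotes row normalisation; S(H) is the share of total forecast error variance explained by cross-variable (off-diagonal) shocks.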
Item Risk identification and allocation in public-private partnerships : a New Zealand perspective : a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Construction Management, Massey University, Auckland, New Zealand (Massey University, 2024-06-13) Rasheed, Nasir

Public-Private Partnerships (PPPs) have become a prevalent solution for funding infrastructure projects amid declining public reserves. Despite their widespread adoption, not all PPP projects prove successful, often due to inadequate risk management. Recognizing the expertise of the private sector, governments, including the New Zealand government, increasingly utilize PPPs. However, there is a scarcity of research on PPPs in the local context, particularly in social infrastructure delivered using the Design-Build-Finance-Maintain-Operate (DBFMO) method. This thesis aims to address this gap by establishing a framework for improving risk management outcomes in New Zealand's PPP infrastructure projects, focusing on critical success factors, empirical investigations into risk identification, and the development and validation of a fuzzy-based risk allocation model to inform stakeholders' decisions. This research employed two distinct questionnaire surveys targeting industry experts from both the public and private sectors, all possessing relevant experience in the local industry and PPP procurement. Given the absence of a precisely defined population, a combination of convenience and judgment sampling was utilized. A total of 43 and 58 PPP experts provided valid responses to the two questionnaires. The sample size was considered appropriate, especially given the relatively recent adoption of PPPs in New Zealand. Additionally, comparisons were made with similar studies that employed questionnaire surveys to ensure validity. The collected data underwent various statistical analyses, including mean score analysis, Cronbach's alpha reliability analysis, independent sample t-tests, and factor analysis. Subsequently, the fuzzy synthetic evaluation (FSE) method was applied to model risk allocation. In addition, the study included a set of semi-structured interviews to provide a practical and policy-making context for the research. Critical success factor rankings established through mean scores revealed the approval and negotiation process, innovation and project complexity, the client's brief, the project's technical feasibility and a strong private consortium to be the five top-ranked factors out of the 27 identified. Similarly, the top three risk allocation criteria (RAC) with very high importance (mean score greater than 4 on a scale of 5) were risk foresight, response to risk and minimise risk loss. Furthermore, factor analysis showed that the nine identified RAC can be classified into three component groups, namely risk management expertise, core risk management capability and risk management strategy. Recognizing the importance of the principle of risk allocation, the proposed fuzzy-based risk allocation model took into account the risk management capability of the public and private sectors. FSE was chosen for its adeptness in handling intricate multi-faceted challenges, particularly in the context of risk distribution decisions that involve the inherent vagueness within human cognitive processes. Because of their contentious nature in the literature and across different PPP projects, 16 risks were carefully chosen to be allocated via the model from a list of 35 risks initially identified. The findings indicate that for 12 of the risks, the distribution proportions between the government and the private sector are comparable. Risks associated with land acquisition and public opposition are predominantly assigned to the public sector, while risks linked to unforeseen geotechnical conditions and financing are predominantly allotted to the private sector. The results were validated through six interviews with highly experienced professionals within the New Zealand PPP scene. The outcomes of this study are anticipated to guide policymakers in formulating effective strategies for assigning risks and devising well-balanced risk sharing arrangements within PPP contracts, to achieve outcomes mutually agreeable to both the public and private sectors, ultimately enhancing the uptake of PPP projects.
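A minimal sketch of how a fuzzy synthetic evaluation (FSE) of risk management capability can be turned into an allocation proportion is given below. The criterion weights and membership grades are hypothetical, and the final capability-ratio split is a simplification; the thesis's model and its validated allocations are not reproduced here.

```python
# Illustrative only: a toy fuzzy synthetic evaluation (FSE) in the spirit of the
# risk allocation model. Criterion weights and membership grades are hypothetical.
import numpy as np

GRADES = np.array([1, 2, 3, 4, 5])  # 1 = very low ... 5 = very high capability

def fse_score(weights, memberships):
    """weights: (n_criteria,) summing to 1; memberships: (n_criteria, 5) rows
    give the share of experts choosing each grade. Returns a crisp 1-5 score."""
    weights = np.asarray(weights, dtype=float)
    memberships = np.asarray(memberships, dtype=float)
    fuzzy_eval = weights @ memberships          # first-level aggregation
    fuzzy_eval /= fuzzy_eval.sum()              # normalise the membership vector
    return float(fuzzy_eval @ GRADES)           # defuzzify against grade values

if __name__ == "__main__":
    w = [0.45, 0.35, 0.20]   # e.g. risk foresight, response to risk, minimise risk loss
    public = [[0.0, 0.1, 0.4, 0.4, 0.1],
              [0.0, 0.2, 0.5, 0.2, 0.1],
              [0.1, 0.3, 0.4, 0.2, 0.0]]
    private = [[0.0, 0.0, 0.2, 0.5, 0.3],
               [0.0, 0.1, 0.3, 0.4, 0.2],
               [0.0, 0.1, 0.4, 0.4, 0.1]]
    s_pub, s_priv = fse_score(w, public), fse_score(w, private)
    share_private = s_priv / (s_pub + s_priv)   # simple capability-based split
    print(f"suggested allocation: private {share_private:.0%}, "
          f"public {1 - share_private:.0%}")
```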
Item Effects of multiphase turbulence on the flow and hazard behavior of dilute pyroclastic density currents : a thesis submitted in partial fulfilment for the degree of Doctor of Philosophy in Earth Science at Massey University, Palmerston North, New Zealand (Massey University, 2024-06-16) Uhle, Daniel Holger

Pyroclastic surges (also called dilute pyroclastic density currents or dilute PDCs) are amongst the most hazardous volcanic phenomena associated with explosive volcanic eruptions and hydrothermal explosions. These fast-moving, turbulent, polydisperse multiphase flows of hot volcanic particles and gas occur frequently and have severe impacts on life and infrastructure. This is attributed to a compounding of hazard effects: large flow-internal dynamic pressures of tens to hundreds of kilopascals destroy reinforced buildings and forests; temperatures of up to several hundred degrees Celsius pose severe burn hazards; and readily respirable hot fine ash particles suspended inside dilute PDCs cause rapid asphyxiation. Direct measurements inside pyroclastic density currents are largely absent, and previous research has used a combination of detailed field studies on PDC deposits, laboratory experiments on analog density currents, numerical modeling, and theoretical work to interrogate the internal flow structure, gas-particle transport, sedimentation and destructiveness of dilute PDCs. Despite major scientific advances over the last two decades, significant fundamental gaps in understanding the turbulent multiphase flow behavior of dilute PDCs endure, preventing the development of robust volcanic hazard models that can be deployed confidently. Critical unknowns remain regarding: (i) how turbulence is generated in dilute PDCs; (ii) how multiphase processes modify the flow and turbulence structure of dilute PDCs; and (iii) if and how turbulent gas-particle feedback mechanisms affect their destructiveness. To address these gaps in understanding, this PhD research involved high-resolution measurements of velocity, dynamic pressure, particle concentration, and temperature inside large-scale experimental dilute PDCs. It is shown that dilute PDCs are characterized by a wide turbulence spectrum of damage-causing dynamic pressure. This spectrum is strongly skewed towards large dynamic pressures, with peak pressures that exceed bulk flow values, routinely used for hazard assessments, by one order of magnitude. To prevent severe underestimation of the damage potential of dilute PDCs, the experimentally determined ratio of turbulence-enforced pressure maxima to routinely estimated bulk pressures should be used as a safety factor in hazard assessments. High-resolution measurements of dynamic pressure and Eulerian-Lagrangian multiphase simulations reveal that these pressure maxima are attributed to the clustering of particles with critical particle Stokes numbers (St = O(1)) at the margins of coherent turbulence structures. The characteristic length scale and frequency of coherent structures modified in this way are controlled by the availability of the largest particles with critical Stokes number. Through this, spatiotemporal variations in peak pressures are governed by the mass loading and subsequent sedimentation of these clustered particles. In addition to the 'continuum phase' loading pressure, the measurements also revealed that the direct impact of clustered margins, and of high-Stokes-number particles decoupling from the margins of structures, generates instantaneous impacts. These piercing-like impact pressures exceed bulk pressure values by two orders of magnitude. Particle impact pressures can cause severe injuries and damage structures; they can be identified as pockmarks on buildings and trees after eruptions. This new type of PDC hazard and the magnitude of pressure impacts need to be accounted for in hazard assessments. Systematic measurements of the evolving experimental pyroclastic surges along the flow runout demonstrate that time-averaged vertical profiles of all flow velocity components and of flow density obey self-similar distributions.
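For reference, the two quantities driving the hazard argument above are the dynamic pressure of the gas–particle mixture and the particle Stokes number; the definitions below are the standard ones, and the thesis's experimental parameters are not reproduced:

\[
P_{\mathrm{dyn}} \;=\; \tfrac{1}{2}\,\rho_{\mathrm{mix}}\,u^{2},
\qquad
St \;=\; \frac{\tau_p}{\tau_f}, \quad \tau_p = \frac{\rho_p\,d_p^{2}}{18\,\mu}
\]

where ρ_mix is the bulk mixture density, u the flow velocity, τ_p the particle response time under Stokes drag (particle density ρ_p, diameter d_p, gas dynamic viscosity μ), and τ_f a characteristic timescale of the turbulent eddies; preferential clustering is strongest when St is of order one.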
Variations of the roughness of the lower flow boundary, geometrically scaling ash- to boulder-sized natural substrates, showed that the self-similar distributions are independent of the roughness. Mathematical relationships developed from the self-similar velocity and density distributions reveal the self-similar vertical distribution of mean dynamic pressure. This empirical model can inform multi-layer PDC models and estimate the height and values of peak time-averaged dynamic pressure for dilute PDCs of arbitrary scale. Turbulence fluctuations around the mean were investigated through Reynolds decomposition. The large-scale turbulence structure and the dominant source of turbulence generation are shown to be controlled by free shear with the outer flow boundary, while strong density gradients at the basal high-shear flow boundary dampen turbulence generation. The large-scale, shear-induced coherent turbulence structures can be tracked along the runout and were found to be superimposed by smaller turbulence structures. In Fourier spectra of dynamic pressure, flow velocity, and temperature, these sub-structures are observed as discrete frequency bands that correspond to the coarse modes of the spatiotemporally evolving flow grain-size distributions. This can be associated with the preferential clustering of particles at the peripheries of the sub-structures. Following the decoupling of particle clusters, their rapid sedimentation occurs periodically at the characteristic frequency of the turbulence sub-structures. This mechanism of preferential clustering, decoupling and rapid sedimentation of particles with critical particle Stokes numbers is an important mechanism of turbulent sedimentation that explains the spatiotemporally evolving flow grain-size distribution of pyroclastic surges.

Item D2D communication based disaster response system under 5G networks : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy (PhD) in Computer and Electronics Engineering, Massey University, Auckland, New Zealand (Massey University, 2023-12-14) Ahmed, Shakil

Many recent natural disasters, such as tsunamis, hurricanes, volcanic eruptions and earthquakes, have led to the loss of billions of dollars, resources and human lives. These catastrophic disasters have drawn researchers' attention to the significant damage to communication infrastructure. Further, communication within the first 72 hours after a disaster is critical for obtaining help from rescuers. The advancement of wireless communication technologies, especially mobile devices and technologies, could help improve emergency communication systems. The next generation of mobile networks and technologies, such as Device-to-Device (D2D) communication, the Internet of Things (IoT), Blockchain, and Big Data, can play significant roles in overcoming the drawbacks of current disaster management systems for data analysis and decision making. Next-generation 5G and 6G cellular networks will provide several complex services for mobile phones and other communication devices. To integrate those services, the 5G cellular network will have the capability to handle significantly higher data rates and traffic congestion than 4G or 3G cellular networks.
D2D communication technology, one of the major technologies in the 5G network, has the capability to exchange a high volume of traffic data directly between User Equipment (UE) without additional control from the Base Station (BS). D2D communication is used alongside other cell tiers in the 5G heterogeneous network (HetNet). Thus, devices can form a cluster and cooperate with each other. As a result, the system tremendously increases network capacity, as devices inside the cluster reuse the same spectrum or use an unlicensed spectrum. This helps to reduce the network's traffic load and achieve significant throughput. D2D communication also has the ability to increase area spectral efficiency, reduce device power consumption and outage probabilities, and improve network coverage. All of these characteristics are vital parameters for public safety and emergency communication applications. The IoT paradigm is another promising technology, with exciting features such as heterogeneity, interoperability, and flexibility. IoT has the capability to handle vast amounts of data, and this huge amount of data creates data security and data storage problems. Although many technologies are used to overcome the problems of validating data authenticity and storing data, the Blockchain system is one of the emerging technologies that provides intrinsic data security. In addition, Big Data technology provides data storage, modification, processing, visualisation and representation in an efficient and easily understandable format. This feature is essential for disaster applications, which require vast amounts of data to be collected and processed quickly for a prompt response. Therefore, the main focus of this research work is exploring and utilising these emerging technologies (D2D, IoT, Big Data and Blockchain) and validating them with mathematical modelling to develop a disaster response system. This thesis proposes a disaster response framework that integrates the emerging technologies to overcome the problems of data communication, data security, data analysis and visualisation. Mathematical analysis and simulation models for multiple disaster sizes were developed based on the D2D communication system. The results show significant improvement in the performance of the disaster framework. The Quality of Service (QoS) is calculated for different scales of disaster impact. Approximately 40% of disaster-affected people can obtain a Signal to Interference and Noise Ratio (SINR) of 5-10 dB, and approximately 20% of users obtain 20-25 dB, when 70% of the infrastructure is damaged by a disaster. The network coverage increased by 25% and the network lifetime increased by 8%-14%. The research helps to develop a resilient disaster communication network which minimises the communication gap between disaster-affected people and the rescue team. It identifies areas according to the needs of the disaster-affected people and offers a viable solution for the government and other stakeholders to visualise the disaster's effect. This helps quick decisions and responses to be made both before and after a disaster.
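The SINR figures quoted above come from the thesis's simulation model, which is not reproduced in the abstract. The sketch below only illustrates how a link-level SINR is computed for a D2D pair reusing cellular spectrum under a simple log-distance path-loss model; the transmit powers, distances and path-loss exponent are hypothetical.

```python
# Illustrative only: link-level SINR (in dB) for a D2D pair reusing cellular
# spectrum. Path-loss exponent, transmit powers and distances are hypothetical,
# not the parameters of the thesis's simulation model.
import math

def rx_power_dbm(tx_dbm, distance_m, exponent=3.5, pl_1m_db=40.0):
    """Received power with a simple log-distance path-loss model."""
    return tx_dbm - (pl_1m_db + 10 * exponent * math.log10(distance_m))

def sinr_db(signal_dbm, interferer_dbm_list, noise_dbm=-104.0):
    to_mw = lambda dbm: 10 ** (dbm / 10)
    interference_mw = sum(to_mw(i) for i in interferer_dbm_list)
    return 10 * math.log10(to_mw(signal_dbm) / (interference_mw + to_mw(noise_dbm)))

if __name__ == "__main__":
    s = rx_power_dbm(20.0, 50.0)    # D2D transmitter 50 m away
    i1 = rx_power_dbm(23.0, 400.0)  # co-channel cellular interferer 400 m away
    print(f"SINR = {sinr_db(s, [i1]):.1f} dB")
```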
Item Theorising and testing the underpinnings of Lean Six Sigma : a thesis submitted in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering, School of Food and Advanced Technology, Massey University, Manawatū, New Zealand (Massey University, 2023-12-07) Halnetti Perera, Achinthya Perera

Lean Six Sigma (LSS) is a widely used business process improvement method that combines Lean and Six Sigma. Despite its popularity and the large volume of research, the theoretical underpinnings of LSS remain underdeveloped. This thesis explores the theoretical foundations and practical implications of LSS, using an LSS project as the unit of analysis. The research objectives include: (i) identifying and operationalising the determinants of LSS; (ii) hypothesising the relationships between the determinants of LSS in predicting and explaining LSS project performance, and testing the hypothesis empirically; (iii) assessing the impact of residual risks on LSS project performance; (iv) interpreting the theoretical relationships from a practical perspective; and (v) testing whether LSS fits nonmanufacturing contexts as well as it fits manufacturing at a theoretical level. To achieve these objectives, a conceptual model was first framed by conducting a comprehensive literature review of the available theories of SS/LSS and by using a novel approach (machine learning) to extract essential elements from the literature on critical success factors (CSFs). The conceptual model was then developed into a testable theoretical model through case research, which facilitated the operationalisation of the theoretical constructs. The overall hypothesis underpinning the theoretical model states: "Leadership Engagement drives LSS Project Initiation and the Continuous Improvement Culture to execute an LSS project to yield the desired outcomes, but the Project Execution → Project Performance causal link would be moderated by the project residual risk." Finally, the theory was empirically tested using partial least squares structural equation modelling based on data from 296 organisations worldwide. Although the data supported the overall hypothesis, some individual paths failed to support the model (p > 0.05). For example, project residual risk did not moderate the impact as anticipated, indicating that risk assessment is given significant attention during LSS project initiation. The total effect of Leadership Engagement on LSS Project Outcomes was 0.216 (p < 0.001), implying its practical importance (a medium effect). The model fitted nonmanufacturing contexts as well as it fitted manufacturing, supporting the hypothesis. Although the case studies suggested that LSS projects are defined differently in manufacturing and nonmanufacturing settings and that the LSS structure differs from context to context, the model is robust enough to provide a solid theoretical foundation for LSS. The study adds to the current body of knowledge as a theory extension in the field of quality and operations management.
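The moderation part of the overall hypothesis can be written in the usual interaction form used in (PLS-)SEM, with the construct names abbreviated (this is a standard representation, not a formula quoted from the thesis):

\[
\mathrm{Performance} \;=\; \beta_{1}\,\mathrm{Execution} \;+\; \beta_{2}\,\mathrm{Risk} \;+\; \beta_{3}\,(\mathrm{Execution}\times\mathrm{Risk}) \;+\; \zeta
\]

Moderation is supported only if the interaction coefficient β₃ is significant; the reported non-significant result (p > 0.05) corresponds to β₃ being statistically indistinguishable from zero, which the thesis interprets as risk already being addressed at project initiation.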

