Massey Documents by Type

Permanent URI for this community: https://mro.massey.ac.nz/handle/10179/294

Search Results

Now showing 1 - 3 of 3
  • Item
    Generative AI, Large Language Models, and ChatGPT in Construction Education, Training, and Practice
    (MDPI (Basel, Switzerland), 2025-03-15) Jelodar MB; Senouci A
    The rapid advancement of generative AI, large language models (LLMs), and ChatGPT presents transformative opportunities for the construction industry. This study investigates their integration across education, training, and professional practice to address skill gaps and inefficiencies. While AI’s potential in construction has been highlighted, limited attention has been given to synchronising academic curricula, workforce development, and industry practices. This research seeks to fill that gap by evaluating AI adoption through a mixed, multi-stage methodology, including theoretical conceptualisation, case studies, content analysis, and the application of strategic frameworks such as scenario planning, SWOT analysis, and PESTEL analysis. The findings show that AI tools enhance foundational learning and critical thinking in education but often fail to develop job-ready skills. Training programmes improve task-specific competencies with immersive simulations and predictive analytics but neglect strategic leadership skills. Professional practice benefits from AI-driven resource optimisation and collaboration tools but faces barriers such as regulatory and interoperability challenges. By aligning theoretical education with practical training and strategic professional development, this research highlights the potential to create a future-ready workforce. The study provides actionable recommendations for integrating AI across domains. These findings contribute to understanding AI’s transformative role in construction, offering a baseline for effective and responsible adoption.
  • Item
    From Google Gemini to OpenAI Q* (Q-Star): A Survey on Reshaping the Generative Artificial Intelligence (AI) Research Landscape
    (MDPI (Basel, Switzerland), 2025-02-01) McIntosh TR; Susnjak T; Liu T; Watters P; Xu D; Liu D; Halgamuge MN; Mladenov V
    This comprehensive survey explored the evolving landscape of generative Artificial Intelligence (AI), with a specific focus on recent technological breakthroughs and the gathering advancements toward possible Artificial General Intelligence (AGI). It critically examined the current state and future trajectory of generative AI, exploring how innovations in actionable, multimodal AI agents able to scale their “thinking” when solving complex reasoning tasks are reshaping research priorities and applications across various domains; the survey also offers an impact analysis of the generative AI research taxonomy. This work assessed the computational challenges, scalability, and real-world implications of these technologies while highlighting their potential to drive significant progress in fields such as healthcare, finance, and education. Our study also addressed the emerging academic challenges posed by the proliferation of both AI-themed and AI-generated preprints, examining their impact on the peer-review process and scholarly communication. The study highlighted the importance of incorporating ethical and human-centric methods in AI development, ensuring alignment with societal norms and welfare, and outlined a strategy for future AI research that focuses on a balanced and conscientious use of generative AI as its capabilities continue to scale.
  • Item
    ChatGPT: The End of Online Exam Integrity?
    (MDPI (Basel, Switzerland), 2024-06-17) Susnjak T; McIntosh TR; Muijs D
    This study addresses the significant challenge that Large Language Models (LLMs) such as ChatGPT pose to the integrity of online examinations, focusing on how these models can undermine academic honesty through their latent and advanced reasoning capabilities. An iterative self-reflective strategy was developed for invoking critical thinking and higher-order reasoning in LLMs when responding to complex multimodal exam questions involving both visual and textual data. The proposed strategy was demonstrated and evaluated by subject experts on real exam questions, and the performance of ChatGPT (GPT-4) with vision was estimated on an additional dataset of 600 text descriptions of multimodal exam questions. The results indicate that the proposed self-reflective strategy can invoke latent multi-hop reasoning capabilities within LLMs, effectively steering them towards correct answers by integrating critical thinking from each modality into the final response. ChatGPT also demonstrated considerable proficiency in answering multimodal exam questions across 12 subjects. These findings challenge prior assertions about the limitations of LLMs in multimodal reasoning and emphasise the need for robust online exam security measures, such as advanced proctoring systems and more sophisticated multimodal exam questions, to mitigate potential academic misconduct enabled by AI technologies.
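
For readers who want a concrete sense of what an iterative self-reflective prompting loop can look like in practice, the minimal sketch below illustrates the general idea in Python. It is an illustration only, not the procedure published in the paper above: the model name, prompt wording, number of reflection rounds, and example question are assumptions, and the multimodal (image) handling used in the study is omitted for brevity. The sketch uses the OpenAI chat completions client.

```python
# Illustrative sketch of a generic iterative self-reflection loop for LLM
# question answering. NOT the authors' published procedure: model name,
# prompts, and iteration count are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"   # placeholder model name


def ask(messages):
    """Send a chat request and return the assistant's reply text."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content


def self_reflective_answer(question: str, rounds: int = 2) -> str:
    """Answer a question, then repeatedly critique and revise the answer."""
    messages = [
        {"role": "system", "content": "Answer exam questions step by step."},
        {"role": "user", "content": question},
    ]
    answer = ask(messages)
    for _ in range(rounds):
        # Self-reflection step: ask the model to critique its own answer
        # and produce a revised, final version.
        messages += [
            {"role": "assistant", "content": answer},
            {"role": "user", "content": (
                "Reflect critically on your answer above. Identify any "
                "reasoning errors or missed details, then give a revised, "
                "final answer."
            )},
        ]
        answer = ask(messages)
    return answer


if __name__ == "__main__":
    # Hypothetical exam-style question, chosen only to exercise the loop.
    print(self_reflective_answer(
        "A simply supported beam of length 4 m carries a uniformly "
        "distributed load of 5 kN/m. What is the maximum bending moment?"
    ))
```

A faithful reproduction of the study's setup would additionally attach the visual part of each exam question (for example, as image content in the user message) so that the reflection step can integrate reasoning from both modalities.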