Journal Articles

Permanent URI for this collection: https://mro.massey.ac.nz/handle/10179/7915

Search Results

Now showing 1 - 4 of 4
  • Item
    Generative AI, Large Language Models, and ChatGPT in Construction Education, Training, and Practice
    (MDPI (Basel, Switzerland), 2025-03-15) Jelodar MB; Senouci A
    The rapid advancement of generative AI, large language models (LLMs), and ChatGPT presents transformative opportunities for the construction industry. This study investigates their integration across education, training, and professional practice to address skill gaps and inefficiencies. While AI’s potential in construction has been highlighted, limited attention has been given to synchronising academic curricula, workforce development, and industry practices. This research seeks to fill that gap by evaluating AI adoption through a mixed, multi-stage methodology, including theoretical conceptualisation, case studies, content analysis, and the application of strategic frameworks such as scenario planning, SWOT analysis, and PESTEL analysis. The findings show AI tools enhance foundational learning and critical thinking in education but often fail to develop job-ready skills. Training programmes improve task-specific competencies with immersive simulations and predictive analytics but neglect strategic leadership skills. Professional practice benefits from AI-driven resource optimisation and collaboration tools but faces barriers such as regulatory and interoperability challenges. By aligning theoretical education with practical training and strategic professional development, this research highlights the potential to create a future-ready workforce. The study provides actionable recommendations for integrating AI across these domains. These findings contribute to understanding AI’s transformative role in construction, offering a baseline for effective and responsible adoption.
  • Item
    Can large language models help predict results from a complex behavioural science study?
    (The Royal Society, 2024-09) Lippert S; Dreber A; Johannesson M; Tierney W; Cyrus-Lai W; Uhlmann EL; Emotion Expression Collaboration; Pfeiffer T
    We tested whether large language models (LLMs) can help predict results from a complex behavioural science experiment. In study 1, we investigated the performance of the widely used LLMs GPT-3.5 and GPT-4 in forecasting the empirical findings of a large-scale experimental study of emotions, gender, and social perceptions. We found that GPT-4, but not GPT-3.5, matched the performance of a cohort of 119 human experts, with correlations of 0.89 (GPT-4), 0.07 (GPT-3.5), and 0.87 (human experts) between aggregated forecasts and realized effect sizes. In study 2, giving participants from a university subject pool the opportunity to query a GPT-4-powered chatbot significantly increased the accuracy of their forecasts. The results indicate promise for artificial intelligence (AI) to help anticipate, at scale and at minimal cost, which claims about human behaviour will find empirical support and which ones will not. Our discussion focuses on avenues for human-AI collaboration in science.
  • Item
    Automating Systematic Literature Reviews with Retrieval-Augmented Generation: A Comprehensive Overview
    (MDPI (Basel, Switzerland), 2024-10-09) Han B; Susnjak T; Mathrani A; Garcia Villalba LJ
    This study examines Retrieval-Augmented Generation (RAG) in large language models (LLMs) and their significant application for undertaking systematic literature reviews (SLRs). RAG-based LLMs can potentially automate tasks like data extraction, summarization, and trend identification. However, while LLMs are exceptionally proficient in generating human-like text and interpreting complex linguistic nuances, their dependence on static, pre-trained knowledge can result in inaccuracies and hallucinations. RAG mitigates these limitations by integrating LLMs’ generative capabilities with the precision of real-time information retrieval. We review in detail the three key processes of the RAG framework: retrieval, augmentation, and generation. We then discuss applications of RAG-based LLMs to SLR automation and highlight future research topics, including integration of domain-specific LLMs, multimodal data processing and generation, and utilization of multiple retrieval sources. We propose a framework of RAG-based LLMs for automating SLRs, which covers four stages of the SLR process: literature search, literature screening, data extraction, and information synthesis. Future research aims to optimize the interaction between LLM selection, training strategies, RAG techniques, and prompt engineering to implement the proposed framework, with particular emphasis on the retrieval of information from individual scientific papers and the integration of these data to produce outputs addressing various aspects such as current status, existing gaps, and emerging trends.
  • Item
    Chat2VIS: Generating Data Visualizations via Natural Language Using ChatGPT, Codex and GPT-3 Large Language Models
    (IEEE, 2023-05-08) Maddigan P; Susnjak T; Didimo W
    The field of data visualisation has long aimed to devise solutions for generating visualisations directly from natural language text. Research in Natural Language Interfaces (NLIs) has contributed towards the development of such techniques. However, the implementation of workable NLIs has always been challenging due to the inherent ambiguity of natural language, as well as unclear and poorly written user queries, which pose problems for existing language models in discerning user intent. Instead of pursuing the usual path of developing new iterations of language models, this study uniquely proposes leveraging the advancements in pre-trained large language models (LLMs) such as ChatGPT and GPT-3 to convert free-form natural language directly into code for appropriate visualisations. This paper presents a novel system, Chat2VIS, which takes advantage of the capabilities of LLMs and demonstrates how, with effective prompt engineering, the complex problem of language understanding can be solved more efficiently, resulting in simpler and more accurate end-to-end solutions than prior approaches. Chat2VIS shows that LLMs, together with the proposed prompts, offer a reliable approach to rendering visualisations from natural language queries, even when queries are highly misspecified and underspecified. This solution also presents a significant reduction in costs for the development of NLI systems, while attaining greater visualisation inference abilities compared to traditional NLP approaches that use hand-crafted grammar rules and tailored models. This study also demonstrates how LLM prompts can be constructed in a way that preserves data security and privacy while being generalisable to different datasets. This work compares the performance of GPT-3, Codex, and ChatGPT across several case studies and contrasts their performance with prior studies.
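The correlations reported in the behavioural-forecasting study above (0.89 for GPT-4, 0.87 for human experts) are correlations between aggregated forecasts and realized effect sizes. As a minimal sketch of that comparison, with made-up numbers standing in for the study's actual forecasts and outcomes:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical illustration only: these are NOT values from the study.
aggregated_forecasts = [0.10, 0.35, 0.20, 0.50, 0.05]
realized_effect_sizes = [0.12, 0.30, 0.25, 0.45, 0.02]
r = pearson(aggregated_forecasts, realized_effect_sizes)
```

A high `r` here means the forecaster (human or LLM) ranked and scaled the effects roughly as they turned out empirically, which is the sense in which GPT-4 "matched" the expert cohort.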
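The three RAG stages named in the SLR-automation overview above (retrieval, augmentation, generation) can be sketched end to end as follows. The toy corpus, the word-overlap retriever, and the `generate` stub are illustrative assumptions, not the authors' pipeline; a real system would use embedding-based retrieval and an actual LLM call:

```python
def retrieve(query, corpus, k=2):
    # Stage 1 - retrieval: score each document by word overlap with the
    # query (a toy stand-in for embedding similarity) and keep the top k.
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment(query, passages):
    # Stage 2 - augmentation: prepend the retrieved passages so the model
    # grounds its answer in them rather than in static pre-trained knowledge.
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Context:\n{context}\n\nQuestion: {query}\n"
            "Answer using only the context above.")

def generate(prompt):
    # Stage 3 - generation: placeholder for an LLM call.
    return f"[LLM response to a {len(prompt)}-char grounded prompt]"

corpus = [
    "RAG combines retrieval with generation to reduce hallucinations.",
    "Systematic literature reviews involve screening and data extraction.",
    "Prompt engineering shapes LLM outputs.",
]
prompt = augment("How does RAG reduce hallucinations?",
                 retrieve("RAG hallucinations", corpus))
answer = generate(prompt)
```

In the proposed SLR framework, the same loop would run per stage: retrieved papers feed screening prompts, screened papers feed extraction prompts, and extracted data feeds synthesis prompts.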
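The Chat2VIS abstract notes that prompts can be constructed to preserve data security and privacy: only the dataset's schema, not its rows, is sent to the LLM, together with a code stub for the model to complete. A minimal sketch of that idea, where the prompt wording and column names are illustrative assumptions rather than the paper's exact template:

```python
def build_viz_prompt(columns, user_query):
    # Embed only column names and types (the schema) - never the data rows -
    # then end with a code stub for the LLM to continue.
    schema = ", ".join(f"{name} ({dtype})" for name, dtype in columns)
    return (
        f"Given a pandas DataFrame df with columns: {schema}.\n"
        f"Write matplotlib code to: {user_query}\n"
        "import matplotlib.pyplot as plt\n"
        "fig, ax = plt.subplots()\n"
    )

# Hypothetical dataset schema and query for illustration.
columns = [("country", "str"), ("year", "int"), ("gdp", "float")]
prompt = build_viz_prompt(columns, "show GDP over time for each country")
```

Ending the prompt mid-script steers the model toward emitting runnable plotting code rather than prose, which is one way the "effective prompt engineering" described above can yield an end-to-end NL-to-visualisation pipeline.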