
Browsing by Author "McIntosh TR"

Now showing 1 - 5 of 5

    ChatGPT: The End of Online Exam Integrity?
    (MDPI (Basel, Switzerland), 2024-06-17) Susnjak T; McIntosh TR; Muijs D
    This study addresses the significant challenge posed by the use of Large Language Models (LLMs) such as ChatGPT to the integrity of online examinations, focusing on how these models can undermine academic honesty by demonstrating their latent and advanced reasoning capabilities. An iterative self-reflective strategy was developed for invoking critical thinking and higher-order reasoning in LLMs when responding to complex multimodal exam questions involving both visual and textual data. The proposed strategy was demonstrated and evaluated on real exam questions by subject experts, and the performance of ChatGPT (GPT-4) with vision was estimated on an additional dataset of 600 text descriptions of multimodal exam questions. The results indicate that the proposed self-reflective strategy can invoke latent multi-hop reasoning capabilities within LLMs, effectively steering them towards correct answers by integrating critical thinking from each modality into the final response. Meanwhile, ChatGPT demonstrated considerable proficiency in answering multimodal exam questions across 12 subjects. These findings challenge prior assertions about the limitations of LLMs in multimodal reasoning and emphasise the need for robust online exam security measures, such as advanced proctoring systems and more sophisticated multimodal exam questions, to mitigate potential academic misconduct enabled by AI technologies.
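    The iterative self-reflective strategy summarized above lends itself to a simple prompting loop. The sketch below is a hedged illustration, assuming a generic `ask` callable standing in for an LLM API; the function name, prompts, and round count are hypothetical, not the authors' released implementation.

        # Minimal sketch of an iterative self-reflective answering loop
        # (hypothetical structure; `ask` stands in for a real LLM call).
        from typing import Callable

        def self_reflective_answer(ask: Callable[[str], str],
                                   question: str, rounds: int = 3) -> str:
            """Answer, then critique and revise over several rounds."""
            answer = ask(f"Answer the exam question step by step:\n{question}")
            for _ in range(rounds):
                critique = ask("Critically review this answer for reasoning "
                               f"errors and overlooked details:\n{answer}")
                answer = ask("Revise the answer using the critique.\n"
                             f"Question: {question}\nAnswer: {answer}\n"
                             f"Critique: {critique}")
            return answer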
    From COBIT to ISO 42001: Evaluating cybersecurity frameworks for opportunities, risks, and regulatory compliance in commercializing large language models
    (Elsevier B.V., 2024-09-01) McIntosh TR; Susnjak T; Liu T; Watters P; Xu D; Liu D; Nowrozy R; Halgamuge MN
    This study investigated the integration readiness of four predominant cybersecurity Governance, Risk and Compliance (GRC) frameworks (NIST CSF 2.0, COBIT 2019, ISO 27001:2022, and the latest ISO 42001:2023) for the opportunities, risks, and regulatory compliance requirements of adopting Large Language Models (LLMs), using qualitative content analysis and expert validation. Our analysis, with both LLMs and human experts in the loop, uncovered both potential for LLM integration and inadequacies in those frameworks' LLM risk oversight. A comparative gap analysis highlighted that the new ISO 42001:2023, specifically designed for Artificial Intelligence (AI) management systems, provided the most comprehensive facilitation of LLM opportunities, whereas COBIT 2019 aligned most closely with the European Union AI Act. Nonetheless, our findings suggested that all evaluated frameworks would benefit from enhancements to address the multifaceted risks associated with LLMs more effectively and more comprehensively, indicating a critical and time-sensitive need for their continuous evolution. We propose integrating human-expert-in-the-loop validation processes as crucial for enhancing cybersecurity GRC frameworks to support secure and compliant LLM integration.
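    The comparative gap analysis described above can be pictured as a framework-by-dimension coverage matrix. The following sketch is illustrative only: the dimension names paraphrase the abstract, and the empty cells would be filled by qualitative content analysis and human-expert validation, not by this code.

        # Hypothetical gap-analysis matrix; dimensions paraphrase the abstract
        # and all scores start empty pending expert validation.
        FRAMEWORKS = ["NIST CSF 2.0", "COBIT 2019",
                      "ISO 27001:2022", "ISO 42001:2023"]
        DIMENSIONS = ["LLM opportunities", "LLM risk oversight",
                      "EU AI Act alignment"]

        matrix = {fw: {dim: None for dim in DIMENSIONS} for fw in FRAMEWORKS}

        def coverage_gaps(matrix):
            """Framework/dimension pairs still lacking an assessed score."""
            return [(fw, dim) for fw, cells in matrix.items()
                    for dim, score in cells.items() if score is None]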
    From Google Gemini to OpenAI Q* (Q-Star): A Survey on Reshaping the Generative Artificial Intelligence (AI) Research Landscape
    (MDPI (Basel, Switzerland), 2025-02-01) McIntosh TR; Susnjak T; Liu T; Watters P; Xu D; Liu D; Halgamuge MN; Mladenov V
    This comprehensive survey explored the evolving landscape of generative Artificial Intelligence (AI), with a specific focus on recent technological breakthroughs and the accelerating advancements toward possible Artificial General Intelligence (AGI). It critically examined the current state and future trajectory of generative AI, exploring how innovations in actionable, multimodal AI agents able to scale their “thinking” when solving complex reasoning tasks are reshaping research priorities and applications across various domains; the survey also offers an impact analysis on the generative AI research taxonomy. This work assessed the computational challenges, scalability, and real-world implications of these technologies while highlighting their potential to drive significant progress in fields like healthcare, finance, and education. Our study also addressed the emerging academic challenges posed by the proliferation of both AI-themed and AI-generated preprints, examining their impact on the peer-review process and scholarly communication. The study highlighted the importance of incorporating ethical and human-centric methods in AI development, ensuring alignment with societal norms and welfare, and outlined a strategy for future AI research that focuses on a balanced and conscientious use of generative AI as its capabilities continue to scale.
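    One concrete way agents “scale their thinking” at inference time is to sample several independent reasoning chains and keep the majority answer. The sketch below shows that self-consistency-style pattern in general terms; it illustrates the concept only and is not a method proposed by this survey.

        # Self-consistency-style voting: sample n answers, return the mode.
        from collections import Counter
        from typing import Callable

        def vote_over_samples(sample_answer: Callable[[], str], n: int = 8) -> str:
            """Draw n independent answers and return the most common one."""
            answers = [sample_answer() for _ in range(n)]
            return Counter(answers).most_common(1)[0][0]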
    Modeling the Chaotic Semantic States of Generative Artificial Intelligence (AI): A Quantum Mechanics Analogy Approach
    (Association for Computing Machinery, 2025-12-01) Liu T; McIntosh TR; Susnjak T; Watters P; Halgamuge MN
    Generative artificial intelligence (AI) models have revolutionized intelligent systems by enabling machines to produce human-like content across diverse domains. However, their outputs often exhibit unpredictability due to complex and opaque internal semantic states, posing challenges for reliability in real-world applications. In this paper, we introduce the AI Uncertainty Principle, a novel theoretical framework inspired by quantum mechanics, to model and quantify the inherent unpredictability in generative AI outputs. By drawing parallels with the uncertainty principle and superposition, we formalize the trade-off between the precision of internal semantic states and output variability. Through comprehensive experiments involving state-of-the-art models and a variety of prompt designs, we analyze how factors such as specificity, complexity, tone, and style influence model behavior. Our results demonstrate that carefully engineered prompts can significantly enhance output predictability and consistency, while excessive complexity or irrelevant information can increase uncertainty. We also show that ensemble techniques, such as Sigma-weighted aggregation across models and prompt variations, effectively improve reliability. Our findings have profound implications for the development of intelligent systems, emphasizing the critical role of prompt engineering and theoretical modeling in creating AI technologies that perceive, reason, and act predictably in the real world.
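    The “Sigma-weighted aggregation” mentioned above suggests weighting each source by the spread of its outputs. The paper's exact formulation is not reproduced here; the sketch below assumes inverse-variance weighting of numeric outputs collected across models and prompt variations, so steadier sources count more.

        # Assumed inverse-variance ("sigma") weighting; a sketch under stated
        # assumptions, not the paper's formula.
        import statistics

        def sigma_weighted_mean(samples_per_source):
            """samples_per_source: {source_name: [numeric outputs]}"""
            weights, means = [], []
            for xs in samples_per_source.values():
                var = statistics.variance(xs) if len(xs) > 1 else 1.0
                weights.append(1.0 / (var + 1e-9))  # steadier -> heavier
                means.append(statistics.fmean(xs))
            return sum(w * m for w, m in zip(weights, means)) / sum(weights)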
    The Erosion of Cybersecurity Zero-Trust Principles Through Generative AI: A Survey on the Challenges and Future Directions
    (MDPI (Basel, Switzerland), 2025-12-01) Xu D; Gondal I; Yi X; Susnjak T; Watters P; McIntosh TR
    Generative artificial intelligence (AI) and persistent empirical gaps are reshaping the cyber threat landscape faster than Zero-Trust Architecture (ZTA) research can respond. We reviewed 10 recent ZTA surveys and 136 primary studies (2022–2024) and found that 98% provided only partial or no real-world validation, leaving several core controls largely untested. Our critique therefore proceeds on two axes: first, mainstream ZTA research is empirically under-powered and operationally unproven; second, generative-AI attacks exploit these very weaknesses, accelerating policy bypass and detection failure. To expose this compounding risk, we contribute the Cyber Fraud Kill Chain (CFKC), a seven-stage attacker model (target identification, preparation, engagement, deception, execution, monetization, and cover-up) that maps specific generative techniques to the NIST SP 800-207 components they erode. The CFKC highlights how synthetic identities, context manipulation, and adversarial telemetry drive up false-negative rates, extend dwell time, and sidestep audit trails, thereby undermining the Zero-Trust principles of “verify explicitly” and “assume breach”. Existing guidance offers no systematic countermeasures for AI-scaled attacks, and compliance regimes struggle to audit content that AI can mutate on demand. Finally, we outline research directions for adaptive, evidence-driven ZTA and argue that incremental extensions of current ZTA are insufficient; only a generative-AI-aware redesign will sustain defensive parity in the coming threat cycle.
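    The seven CFKC stages named in the abstract translate naturally into a small data model. The mapping values below are illustrative placeholders, assuming plausible NIST SP 800-207 touchpoints; they are not the paper's published mapping.

        # CFKC stages as named in the abstract; the component mapping is a
        # hypothetical placeholder, not the paper's published table.
        from enum import Enum

        class CFKCStage(Enum):
            TARGET_IDENTIFICATION = 1
            PREPARATION = 2
            ENGAGEMENT = 3
            DECEPTION = 4
            EXECUTION = 5
            MONETIZATION = 6
            COVER_UP = 7

        eroded_component_example = {
            CFKCStage.DECEPTION: "Policy Engine (trust-algorithm inputs)",
            CFKCStage.COVER_UP: "audit trails and activity logs",
        }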
