Browsing by Author "Yi X"

Now showing 1 - 1 of 1
    Item
    The Erosion of Cybersecurity Zero-Trust Principles Through Generative AI: A Survey on the Challenges and Future Directions
    (MDPI (Basel, Switzerland), 2025-12-01) Xu D; Gondal I; Yi X; Susnjak T; Watters P; McIntosh TR
Generative artificial intelligence (AI) and persistent empirical gaps are reshaping the cyber threat landscape faster than Zero-Trust Architecture (ZTA) research can respond. We reviewed 10 recent ZTA surveys and 136 primary studies (2022–2024) and found that 98% provided only partial or no real-world validation, leaving several core controls largely untested. Our critique therefore proceeds on two axes: first, mainstream ZTA research is empirically underpowered and operationally unproven; second, generative-AI attacks exploit these very weaknesses, accelerating policy bypass and detection failure. To expose this compounding risk, we contribute the Cyber Fraud Kill Chain (CFKC), a seven-stage attacker model (target identification, preparation, engagement, deception, execution, monetization, and cover-up) that maps specific generative techniques to the NIST SP 800-207 components they erode. The CFKC highlights how synthetic identities, context manipulation, and adversarial telemetry drive up false-negative rates, extend dwell time, and sidestep audit trails, thereby undermining the Zero-Trust principles of "verify explicitly" and "assume breach". We further argue that existing guidance offers no systematic countermeasures for AI-scaled attacks and that compliance regimes struggle to audit content that AI can mutate on demand. Finally, we outline research directions for adaptive, evidence-driven ZTA, arguing that incremental extensions of current ZTA are insufficient; only a generative-AI-aware redesign will sustain defensive parity in the coming threat cycle.
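To make the CFKC structure described in the abstract concrete, the sketch below encodes the seven stages as a Python enumeration and pairs each of the three generative techniques named above with NIST SP 800-207 Zero-Trust logical components they could plausibly erode. Only the stage names and technique names come from the abstract; the technique-to-component mapping is an illustrative assumption, not the paper's own mapping.

    from enum import Enum, auto

    # The seven CFKC stages, exactly as enumerated in the abstract.
    class CFKCStage(Enum):
        TARGET_IDENTIFICATION = auto()
        PREPARATION = auto()
        ENGAGEMENT = auto()
        DECEPTION = auto()
        EXECUTION = auto()
        MONETIZATION = auto()
        COVER_UP = auto()

    # Illustrative mapping (assumed for this sketch, not taken from the paper):
    # generative techniques named in the abstract -> NIST SP 800-207 logical
    # components (Policy Engine, Policy Administrator, Policy Enforcement Point)
    # and supporting telemetry feeds they might undermine.
    TECHNIQUE_TO_ZTA_COMPONENT = {
        "synthetic identities": ["Policy Engine", "Policy Administrator"],
        "context manipulation": ["Policy Engine"],
        "adversarial telemetry": ["Policy Enforcement Point", "SIEM / monitoring feeds"],
    }

    if __name__ == "__main__":
        # Print the attacker model in stage order, then the assumed mapping.
        for stage in CFKCStage:
            print(f"Stage {stage.value}: {stage.name.replace('_', ' ').title()}")
        for technique, components in TECHNIQUE_TO_ZTA_COMPONENT.items():
            print(f"{technique} -> {', '.join(components)}")

The enumeration is only a reading aid for the abstract; the paper itself should be consulted for the actual stage-by-stage mapping of generative techniques to ZTA components.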
