Cyborg knowledge production with an AI psychologist : tangled threads of gendered harm, ethics, and care amidst a mental health crisis : a thesis presented in partial fulfilment of the requirements for the degree of Master of Arts in Psychology at Massey University, Manawatu, Aotearoa New Zealand


Date

2025

Publisher

Massey University

Rights

The author

Abstract

This thesis explores the use of artificial intelligence (AI) chatbots to provide mental health advice and the potential perpetuation of harmful gendered discourses through the technologisation of care. Situated within the ongoing mental health crisis in Aotearoa New Zealand and the rapid rise of generative AI, this study navigates the unprecedented complexities of operating within an emerging and rapidly evolving research field. Navigating ethical relational dilemmas with limited institutional guidance, alongside the reinforcement of human exceptionalism, challenged reflexive partnering with AI chatbots to co-produce knowledge. Donna Haraway's cyborg metaphor guided the methodological and epistemological considerations for the study, contributing to the introduction of the critical concepts of cyborgphancy (the sycophantic nature of AI chatbots) and cyborg knowledge production to facilitate understanding of this rapidly evolving research area. Semi-structured interviews were conducted with the AI Psychologist chatbot on the Character.ai platform, and ChatGPT was utilised as a research assistant and emic advisor. A threaded narrative analysis embraced the contradictory nature of cyborg knowledge production, weaving together the partial and multiple relationships between researcher, AI chatbot, and help-seekers within the reproduction of psychological, gendered, and biologically essentialist discourses. The findings challenge the illusion of neutrality, interrogating the AI Psychologist's gender-neutral responses as reproductions of androcentric knowledge bases that reinforce gendered power dynamics and systems of oppression. ChatGPT's emic analysis confirmed the perpetuation of harmful discourses, attributing this to fundamental design features of AI chatbots. This study offers a qualitative feminist post-structural analysis of the emerging practice of engaging with AI chatbots for mental health support.
There is substantial potential for harm to be perpetuated by AI in this context, given the proliferation of AI chatbot usage and the failure of the mental health system to provide adequate support. This risk necessitates greater scrutiny of AI chatbot use for mental health purposes, education about potential harms, and robust safeguards to protect help-seekers.
