Journal Articles
Permanent URI for this collection: https://mro.massey.ac.nz/handle/10179/7915
2 results
Item: Reducing AI bias in recruitment and selection: an integrative grounded approach (Taylor and Francis Group, 2025-03-20)
Soleimani M; Intezari A; Arrowsmith J; Pauleen DJ; Taskin N

Artificial Intelligence (AI) is transforming business domains such as operations, marketing, risk, and financial management. However, its integration into Human Resource Management (HRM) poses challenges, particularly in recruitment, where AI influences work dynamics and decision-making. This study, using a grounded theory approach, interviewed 39 HR professionals and AI developers to explore potential biases in AI-Recruitment Systems (AIRS) and identify mitigation techniques. Findings highlight a critical gap: the HR profession's need to embrace both technical skills and nuanced people-focused competencies to collaborate effectively with AI developers and drive informed discussions on the scope of AI's role in recruitment and selection. This research integrates Gibson's direct perception theory and Gregory's indirect perception theory, combining psychological, information systems, and HRM perspectives to offer insights into decision-making biases in AI. A framework is proposed to clarify decision-making biases and guide the development of robust protocols for AI in HR, with a focus on ethical oversight and regulatory needs. This research contributes to the AI-based HR decision-making literature by exploring the intersection of cognitive bias and AI-augmented decisions in recruitment and selection. It offers practical insights for HR professionals and AI developers on how collaboration and perception can improve the fairness and effectiveness of AIRS-aided decisions.

Item: Mitigating cognitive biases in developing AI-assisted recruitment systems: A knowledge-sharing approach (IGI Global, 2022)
Soleimani M; Intezari A; Pauleen DJ

Artificial intelligence (AI) is increasingly embedded in business processes, including the human resource (HR) recruitment process.
While AI can expedite recruitment, industry evidence shows that AI-recruitment systems (AIRS) may fail to make unbiased decisions about applicants. Biases encoded in AI datasets and algorithms risk leading AIRS to replicate and amplify human biases. Developing less biased AIRS therefore requires collaboration between HR managers and AI developers to train algorithms and explore algorithmic biases. Using an exploratory research design, 35 HR managers and AI developers worldwide were interviewed to understand the role of knowledge sharing in mitigating biases in AIRS during such collaboration. The findings show that knowledge sharing can help mitigate biases in AIRS by informing data labeling, clarifying job functions, and improving the machine learning model. Theoretical contributions and practical implications are suggested.
