Browsing by Author "Soleimani M"

Now showing 1 - 3 of 3
  • Cognitive biases in developing biased artificial intelligence recruitment system
    (University of Hawai‘i at Mānoa, 2021-01-01) Soleimani M; Intezari A; Taskin N; Pauleen D; Bui TX
    Artificial Intelligence (AI) in a business context is designed to provide organizations with valuable insight into decision-making and planning. Although AI can help managers make decisions, it may pose unprecedented issues, such as biased datasets and implicit biases built into algorithms. To assist managers in making unbiased, effective decisions, AI needs to be unbiased too. Therefore, it is important to identify biases that may arise in the design and use of AI. One of the areas where AI is increasingly used is the Human Resources recruitment process. This article reports on the preliminary findings of an empirical study answering the question: how do cognitive biases arise in AI? We propose a model to determine people's role in developing AI recruitment systems. Identifying the sources of cognitive biases can provide insight into how to develop unbiased AI. The academic and practical implications of the study are discussed.
  • Mitigating cognitive biases in developing AI-assisted recruitment systems: A knowledge-sharing approach
    (IGI Global, 2022) Soleimani M; Intezari A; Pauleen DJ
    Artificial intelligence (AI) is increasingly embedded in business processes, including the human resource (HR) recruitment process. While AI can expedite the recruitment process, evidence from industry shows that AI-recruitment systems (AIRS) may fail to achieve unbiased decisions about applicants. There are risks of encoding biases in the datasets and algorithms of AI, which lead AIRS to replicate and amplify human biases. To develop less biased AIRS, collaboration between HR managers and AI developers for training algorithms and exploring algorithmic biases is vital. Using an exploratory research design, 35 HR managers and AI developers worldwide were interviewed to understand the role of knowledge sharing during their collaboration in mitigating biases in AIRS. The findings show that knowledge sharing can help to mitigate biases in AIRS by informing data labeling, understanding job functions, and improving the machine learning model. Theoretical contributions and practical implications are suggested.
  • Reducing AI bias in recruitment and selection: an integrative grounded approach
    (Taylor and Francis Group, 2025-03-20) Soleimani M; Intezari A; Arrowsmith J; Pauleen DJ; Taskin N
    Artificial Intelligence (AI) is transforming business domains such as operations, marketing, risk, and financial management. However, its integration into Human Resource Management (HRM) poses challenges, particularly in recruitment, where AI influences work dynamics and decision-making. This study, using a grounded theory approach, interviewed 39 HR professionals and AI developers to explore potential biases in AI-Recruitment Systems (AIRS) and identify mitigation techniques. Findings highlight a critical gap: the HR profession’s need to embrace both technical skills and nuanced people-focused competencies to collaborate effectively with AI developers and drive informed discussions on the scope of AI’s role in recruitment and selection. This research integrates Gibson’s direct perception theory and Gregory’s indirect perception theory, combining psychological, information systems, and HRM perspectives to offer insights into decision-making biases in AI. A framework is proposed to clarify decision-making biases and guide the development of robust protocols for AI in HR, with a focus on ethical oversight and regulatory needs. This research contributes to AI-based HR decision-making literature by exploring the intersection of cognitive bias and AI-augmented decisions in recruitment and selection. It offers practical insights for HR professionals and AI developers on how collaboration and perception can improve the fairness and effectiveness of AIRS-aided decisions.
