Developing unbiased artificial intelligence in recruitment and selection: a processual framework: a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Management at Massey University, Albany, Auckland, New Zealand
For several generations, scientists have attempted to build enhanced intelligence into computer systems. Recently, progress in developing and implementing Artificial Intelligence (AI) has quickened. AI is now attracting the attention of business and government leaders as a potential way to optimise decisions and performance across all management levels from operational to strategic. One of the business areas where AI is being used widely is the Recruitment and Selection (R&S) process.
However, despite this tremendous growth in interest in AI, there is a serious lack of understanding of its potential impact on human life, society and culture. One of the most significant issues is the danger of biases being built into the gathering and analysis of data and the subsequent decision-making. Cognitive biases enter algorithmic models when they reflect the implicit values of the humans involved in defining, coding, collecting, selecting or using the data that trains the algorithm. Machine learning can then make these biases self-reinforcing, causing AI systems to make biased decisions. If AI systems are to guide managers in making effective decisions, unbiased AI is required.
This study adopted an exploratory, qualitative research design to investigate potential biases in the R&S process and how cognitive biases can be mitigated in the development of AI-Recruitment Systems (AIRS). Classic grounded theory guided the study design, data gathering and analysis. Thirty-nine HR managers and AI developers from around the world were interviewed.
The findings empirically map the development process of AIRS, together with the technical and non-technical techniques used at each stage of the process to mitigate cognitive biases. The study contributes to the theory of information system design by explaining the retraining phase, which corresponds to continuous mutability in developing AI: the machine learning models are retrained as part of the development process, demonstrating the system's mutability, and the learning process over many training cycles improves the algorithms' accuracy.
This study also extends knowledge sharing concepts by highlighting the importance of cross-functional knowledge sharing between HR managers and AI developers in mitigating cognitive biases when developing AIRS. Such knowledge sharing can occur when understanding the essential criteria for each job position, preparing datasets for training ML models, testing ML models, and giving feedback on, retraining, and improving ML models.
Finally, this study contributes to our understanding of AI transparency by identifying two known cognitive biases in the R&S process, similar-to-me bias and stereotype bias, that assist in assessing the ML model's outcomes. In addition, the AIRS process model provides a clear account of data collection, data preparation, and the training and retraining of the ML model, and indicates the role of HR managers and AI developers in mitigating biases and their accountability for AIRS decisions.
The development process of unbiased AIRS has significant implications for the human resource field, as well as for other fields and industries where AI is used today, such as education and insurance services, in mitigating cognitive biases during the development of AI. In addition, this study highlights the limitations of AI systems and helps human decision makers (i.e., HR managers) avoid building biases into their systems in the first place.