1. Introduction
The world is witnessing groundbreaking changes emerging from the application of artificial intelligence (AI). AI has revolutionized many sectors, including healthcare, education, retail, finance, insurance, and law enforcement, and is being increasingly adopted due to its ability to perform complex tasks at a level comparable to humans. Companies were expected to spend around $98 billion on AI globally in 2023 (International Data Corporation, 2019). This makes sense, as AI solves critical business problems, helping organizations become more efficient and gain competitive advantage while also reducing operational costs (Davenport & Ronanki, 2018; Oana, Cosmin, & Valentin, 2017; Rai, 2020). However, the use of AI is not without limitations.
With the increasing popularity of automating and enhancing business processes with AI, many scholars and practitioners have voiced their concerns regarding the dark sides of AI. Concerns over fairness and algorithm bias, in particular, have increased (Wang, Harper, & Zhu, 2020). Algorithm bias occurs when AI produces systematically unfair outcomes that can arbitrarily put a particular individual or group at an advantage or disadvantage over another (Gupta & Krishnan, 2020; Sen, Dasgupta, & Gupta, 2020). Such outcomes arise mainly from unrepresentative datasets or flaws in algorithm design, and they particularly affect underrepresented minority groups (Gupta & Krishnan, 2020; Mullainathan & Obermeyer, 2017; Obermeyer, Powers, Vogeli, & Mullainathan, 2019). Recently, many cases have showcased gender, racial, and socio-economic biases emanating from AI applications. Some of these include several facial recognition systems, for example, Amazon's AI-based "Rekognition" software, discriminating against darker-skinned individuals and also producing unreliable results in identifying females; Google's AI hate speech detector was found to produce racially biased outcomes; Google showed fewer recruitment ads for high-paying jobs to women than to men; Amazon abandoned an algorithmic human resources recruitment system for reviewing and ranking applicants' resumes because it was biased against women; a racial bias in a medical algorithm developed by Optum was found to favor white patients over sicker black patients; and the robodebt scheme in Australia wrongly and unlawfully pursued hundreds of thousands of welfare clients for debts they did not owe (Blier, 2019; Hunter, 2020; Johnson, 2019; Martin, 2019).
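The mechanism described above, in which an unrepresentative training set produces systematically unfair outcomes, can be illustrated with a minimal simulation. The data, the single-threshold classifier, and both group profiles below are hypothetical constructions for illustration only (they do not come from the cases cited above): a decision threshold is fit on a training sample dominated by group A, whose qualified members score higher on a proxy feature than equally qualified members of group B, so the learned rule rejects every qualified group-B applicant.

```python
# Hypothetical sketch of algorithm bias from an unrepresentative dataset.
# Samples are (proxy_score, qualified) pairs; the "model" is one threshold.

def best_threshold(samples):
    """Pick the threshold that maximizes accuracy on (score, label) pairs."""
    candidates = sorted({s for s, _ in samples})
    def accuracy(t):
        return sum((s >= t) == bool(y) for s, y in samples) / len(samples)
    return max(candidates, key=accuracy)

def false_negative_rate(samples, t):
    """Share of truly qualified applicants rejected at threshold t."""
    positives = [(s, y) for s, y in samples if y == 1]
    return sum(1 for s, _ in positives if s < t) / len(positives)

# Qualified (label 1) group-A applicants score >= 0.6 on the proxy feature;
# qualified group-B applicants score lower (>= 0.4) because the proxy is skewed.
group_a = [(0.8, 1), (0.7, 1), (0.6, 1), (0.5, 0), (0.3, 0), (0.2, 0)]
group_b = [(0.5, 1), (0.45, 1), (0.3, 0), (0.2, 0)]

# Unrepresentative training set: group A is heavily oversampled.
train = group_a * 10 + group_b
t = best_threshold(train)

print(f"learned threshold: {t}")                                  # 0.6
print(f"FNR group A: {false_negative_rate(group_a, t):.2f}")      # 0.00
print(f"FNR group B: {false_negative_rate(group_b, t):.2f}")      # 1.00
```

The threshold that maximizes overall training accuracy is optimal for the majority group yet rejects all qualified minority-group applicants, mirroring how a globally "accurate" model can still be systematically unfair to an underrepresented group.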
The impact of algorithm bias can be devastating, asymmetric, and oppressive, with individuals discriminated against and businesses negatively affected. Despite the increasing understanding of algorithm bias and its effects, research in this stream lacks a systematic discussion of how it can affect service systems and how algorithm bias can be addressed in data-driven decision making. Therefore, this paper responds to the question: 'how to address algorithm bias in AI-driven customer management?' The main objectives of the current study are: 1) to review and analyze algorithm bias in customer management; 2) to synthesize the systematic literature review findings into a decision-making framework; and 3) to provide future research directions based on the identified knowledge gaps. The systematic literature review on the emerging topic of algorithm bias contributes to the AI literature mainly by providing a clear picture of the determinants of algorithm bias and its effects on customer management. This study also uniquely contributes to theory by presenting a theoretical framework that identifies four consistency measures and six post-hoc measures to address algorithm bias in customer management. Further, this study is important as it contributes to the debate on responsible innovation and ethical AI (Ghallab, 2019; Gupta & Krishnan, 2020; Rakova et al., 2020) by scrutinizing the key ethical challenge of algorithm bias in AI applications.