Basic Issues and Challenges on Explainable Artificial Intelligence (XAI) in Healthcare Systems

Oladipo Idowu Dauda, Joseph Bamidele Awotunde, Muyideen AbdulRaheem, Shakirat Aderonke Salihu
DOI: 10.4018/978-1-6684-3791-9.ch011

Abstract

Artificial intelligence (AI) studies are progressing at a breakneck pace, with prospective programs being established across healthcare industries. In healthcare, the promise of AI has been demonstrated extensively through numerous applications such as medical support systems and smart healthcare. The development of explainable artificial intelligence (XAI) has been extremely beneficial in this direction. XAI models allow smart healthcare systems equipped with AI models to produce results that can be understood and trusted. Therefore, the goal of this chapter is to discuss the utility of XAI in systems used in healthcare. The issues and difficulties related to the usage of XAI models in the healthcare system are also discussed. The findings present examples of XAI's effective implementation in medical practice. The real-world application of XAI models in healthcare will significantly improve users' trust in AI algorithms in healthcare systems.

Introduction

The application of artificial intelligence (AI) in mission-critical systems such as healthcare, self-driving automobiles, and the military has a direct influence on human life (Awotunde, et al., 2021a; Awotunde, et al., 2021b). The "black box" nature of AI, on the other hand, makes deployment in mission-critical applications challenging, posing ethical and legal problems and leading to a deficit of trust (Das & Rad 2020; Abiodun et al., 2021). A subfield of AI known as Explainable Artificial Intelligence (XAI) provides a collection of tools, algorithms, and techniques for producing natural, high-quality, human-comprehensible interpretations of AI results. Beyond offering a detailed overview of the current XAI ecosystem, this chapter delves into XAI in the healthcare system.

The current research interest in XAI helped give rise to regulation such as the European General Data Protection Regulation (GDPR) (AI, 2019), which governs the data generated from captured medical data. The literature gives insight into the crucial issues of trust in AI (Weld & Bansal 2019; Lui & Lamb 2018; Hengstler, Enkel & Duelli 2016), bias (Chen et al., 2019; Challen, et al., 2019; DeCamp & Lindvall 2020), the influence of adversarial examples in misleading classifier results (Guo et al., 2019; Su, Vargas & Sakurai 2019), and ethics (Cath et al., 2018; Etzioni & Etzioni 2017; Bostrom & Yudkowsky 2014; Dignum, 2018), and thereby underscores the importance of XAI generally. Curiosity, according to the authors in (Miller 2019), is one of the primary reasons why people seek explanations for certain actions. An additional reason may be to make learning easier, so that model creation can be repeated and better results achieved.

Every explanation must be consistent across identical datasets and produce the same or comparable interpretations over time (Sokol & Flach 2020). Explanations must make the AI system demonstrable, with the aim of promoting human comprehension, confidence in decision-making, and unbiased, just results. As a result, an explainable or interpretable solution for AI systems is required to assure transparency, trust, and fairness in AI decision-making. An explanation is a way of confirming an AI agent's or algorithm's output decision.

To explain a cancer detection model based on microscopy images, a map of the input pixels that contribute to the model's output might be employed. A voice recognition model might be explained by means of the power spectrum information available at a given moment that had the greatest effect on the current output choice. Furthermore, the trained model's variables or activations can serve as the basis for explanations, which may be conveyed using surrogates such as decision trees, gradients, or other approaches. In the context of supervised learning, an explanation of why an agent selected one choice over another might be presented. The concepts of interpretable and explainable AI, on the other hand, are typically ambiguous and may be misleading (Rudin 2019), and they need to incorporate some form of reasoning (Doran, Schulz & Besold 2017).
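As a minimal sketch of the pixel-attribution idea described above, the following Python/PyTorch code computes a simple gradient-based saliency map. The small convolutional network, the two-class output, and the random stand-in image are hypothetical placeholders rather than anything from this chapter; the essential step is back-propagating the predicted class score to the input pixels so that the gradient magnitude can serve as a map of each pixel's contribution.

```python
import torch
import torch.nn as nn

# Hypothetical classifier standing in for a cancer-detection CNN;
# any differentiable image model could be substituted here.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),   # two illustrative classes, e.g. benign vs. malignant
)
model.eval()

def saliency_map(model, image):
    """Return the absolute input gradient for the predicted class.

    Pixels with larger gradient magnitude contributed more strongly to
    the model's output, which is the intuition behind pixel-attribution
    explanations.
    """
    image = image.clone().detach().requires_grad_(True)
    scores = model(image)                       # shape: (1, num_classes)
    predicted = scores.argmax(dim=1).item()
    scores[0, predicted].backward()             # d(score) / d(pixels)
    return image.grad.abs().max(dim=1).values   # collapse colour channels

# Usage with a random stand-in for a microscopy image (1 x 3 x 64 x 64).
example = torch.rand(1, 3, 64, 64)
heatmap = saliency_map(model, example)          # shape: (1, 64, 64)
print(heatmap.shape)
```

In practice the resulting heatmap would be overlaid on the original image so that a clinician can see which regions drove the prediction; more robust attribution methods (integrated gradients, SHAP, LIME) follow the same input-to-output reasoning.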
