Explainable AI in Healthcare: A Multi-Disciplinary Perspective

Shantha Visalakshi Upendran
DOI: 10.4018/979-8-3693-5468-1.ch004

Abstract

With the advent of machine learning (ML)-based tools in the healthcare domain, treatment methodologies such as digital healthcare (HC), which integrates cross-domain fusion of cross-modality imaging and non-imaging health data, and personalized treatment have been recommended to improve the overall efficacy of healthcare systems. Given the intensive demand for skilled physicians, ML approaches offer a wide range of capabilities, from filtering emails and identifying objects in images to analysing large volumes of complex, interrelated data. Massive amounts of healthcare data are generated every day within electronic health records. In turn, healthcare providers can take a more predictive approach toward a unified system that concentrates on clinical decision support, clinical practice guideline development, and automated healthcare, offering a broad range of features in a precise manner, such as richer patient data for better diagnosis and medical research for future reference. This chapter provides a complete overview of a typical ML workflow, comprising the predominant phases of data collection, data pre-processing, modelling, training, evaluation, tuning, and deployment, and of how explainable artificial intelligence (XAI) mechanisms help integrate interpretability and explainability into that workflow. In general, XAI can be defined as the set of processes and methods that produce detailed, comprehensible justifications of how a model functions, making the outcomes generated by ML techniques easier to understand and trust. The ultimate aim is to explain the model's behaviour to the end user, leading to a trustworthy environment.
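The workflow phases named above (collection, pre-processing, modelling, training, evaluation, tuning, deployment) can be sketched end to end in a few lines. This is a minimal, hypothetical illustration, not the chapter's own implementation: the synthetic "patient records" and the nearest-centroid classifier are assumptions chosen only to make each phase concrete.

```python
import random

# 1. Data collection: synthetic two-feature "patient records" with a
#    binary label (e.g. low-risk vs high-risk), one cluster per class.
random.seed(0)
data = [([random.gauss(m, 1.0) for m in centre], label)
        for label, centre in enumerate([(0.0, 0.0), (3.0, 3.0)])
        for _ in range(50)]

# 2. Pre-processing: shuffle and split into training and test sets.
random.shuffle(data)
train, test = data[:80], data[80:]

# 3-4. Modelling and training: fit one mean vector (centroid) per class.
def fit(records):
    groups = {}
    for x, y in records:
        groups.setdefault(y, []).append(x)
    return {y: [sum(col) / len(xs) for col in zip(*xs)]
            for y, xs in groups.items()}

def predict(centroids, x):
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

model = fit(train)

# 5. Evaluation: accuracy on the held-out test set. Tuning and
#    deployment would iterate on this number before shipping the model.
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

In a real pipeline each phase would be far richer (imputation, cross-validation, hyperparameter search), but the control flow is the same.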
In addition, the XAI techniques most relevant to the healthcare domain include dimension reduction, feature importance, attention mechanisms, knowledge distillation, and surrogate representations, which are used to develop and validate decision-support tools. The growth of XAI has enabled wider use of aggregated, personalized health data with ML models to automate diagnosis and to tailor therapies promptly, precisely, optimally, and dynamically. XAI mechanisms ensure better decision making by letting the end user know how the ML model derived its outcomes and medical results.
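One of the techniques listed above, feature importance, can be illustrated with permutation importance: shuffle one feature column and measure how much the model's accuracy drops. The technique is model-agnostic; the fixed rule-based "model" and toy data below are hypothetical stand-ins for a trained clinical classifier.

```python
import random

random.seed(1)

# Toy records: the label depends on feature 0 only; feature 1 is noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [1 if x[0] > 0 else 0 for x in X]

# A hypothetical fixed model: predict class 1 when feature 0 is positive.
predict = lambda x: 1 if x[0] > 0 else 0

def accuracy(X, y):
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Drop in accuracy after shuffling one feature column."""
    base = accuracy(X, y)
    column = [x[feature] for x in X]
    random.shuffle(column)
    X_perm = [x[:feature] + [v] + x[feature + 1:]
              for x, v in zip(X, column)]
    return base - accuracy(X_perm, y)

for f in (0, 1):
    print(f"feature {f} importance: {permutation_importance(X, y, f):.2f}")
```

Shuffling the informative feature degrades accuracy sharply, while shuffling the noise feature changes nothing; presented to a clinician, such scores indicate which measurements actually drive a prediction.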

Introduction

Explainability can be considered a characteristic of an AI-driven system that allows a user to reconstruct why the AI came up with the predictions it generated. This chapter examines the interdisciplinary nature of explainability and its implications for the future of healthcare as well as for current real-time practice. Explainable AI (XAI) systems are equipped to provide self-explanatory reasoning behind an AI-driven system's decisions and predictions. With the support of personal digital health records, it is possible to predict future illness and to diagnose critical diseases by analysing electronic health data. Digital health records comprise both structured and unstructured data, such as medical diagnosis reports in digitized text, imaging, signal data, and sensor data taken from wearables. Explainability is thus a multi-faceted concept that combines the technical aspects of a proposed solution (baseline models, XAI techniques, data types) with explainability features (type, scope, and presentation), and it must satisfy the medical practitioner's perspective.

Figure 1. XAI for Healthcare: Multi-Disciplinary Perspective

AI-based algorithms, natural language processing, and recent generative AI tools can be used to create human-readable reports, which in turn help clinicians arrive at accurate results. The increased interpretability and comprehensive reporting, together with reasonable, data-driven decisions delivered through XAI, lead to evidence-based medication, thereby ensuring better medical care for those in need (Adadi, 2020). XAI in clinical decision support systems (CDSS) interacts continuously with the existing healthcare knowledge base to narrow down the root cause and guide the subsequent diagnosis and clinical procedures. As depicted in Figure 1, the significance of XAI must be perceived from the medical, patient, legal, and technological perspectives incorporated to make the proposed models more evident (Amann, 2020).
