Unveiling the Depths of Explainable AI: A Comprehensive Review

Wasim Khan, Mohammad Ishrat
DOI: 10.4018/979-8-3693-0968-1.ch004

Abstract

Explainable AI (XAI) has become increasingly important in the fast-evolving fields of AI and ML. The complexity and opacity of AI, especially in the context of deep learning, pose unique challenges that are explored in this chapter. While deep learning has shown impressive performance, it has been criticized for its opaque reasoning. The fundamental motivation behind this research was to compile a comprehensive, up-to-date survey of XAI methods applicable to a wide variety of fields. The review rests on a careful examination of the methodologies and techniques employed in XAI, along with their implications within specific application contexts. In addition to characterizing the current state of XAI, the authors recognize the need for continued advancement by examining the limitations of existing methods. They also offer a succinct outlook on the future trajectory of XAI research, emphasizing emerging avenues and promising directions for significant progress.
Chapter Preview

1. Introduction

Much of the credit for the emergence of revolutionary technologies such as the Internet of Things (IoT), autonomous vehicles, and augmented and virtual reality can be attributed to advances in communication systems. Similarly, the concept of intelligent 'objects' has heralded a new era of innovation in applications and services, one that has profoundly and positively impacted our daily lives. These applications generate vast amounts of high-dimensional, heterogeneous data, necessitating effective strategies for data mining and insight extraction (Khan & Haroon, 2022b). Depending on the distance and duration of travel, an autonomous vehicle can generate anywhere from five to twenty terabytes of data per day. Such data, required for monitoring, prediction, decision-making, and control (e.g., for autonomous navigation), often demands real-time or near-real-time processing. In these settings, data analytics tools such as Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are typically employed (Tien, 2017).

Mathematical models serve as the foundation of machine learning (ML), a subset of artificial intelligence. These models are constructed and trained on specific datasets and then used to make accurate predictions for individual test samples without explicit programming. Deep learning, the predominant subfield of ML today, strives to emulate the decision-making process of the human brain by analyzing extensive datasets and identifying patterns (Pramod et al., 2021). Deep neural networks, the key component of deep learning, find increasing application in domains such as computer vision (CV), natural language processing (NLP), and the Internet of Things (IoT). Despite deep learning's exceptional performance relative to conventional ML algorithms and its industry-leading achievements, it faces criticism for being opaque and inscrutable, since it cannot articulate the rationale behind a particular decision (Khan & Haroon, 2022c).

Applications based on traditional algorithms often struggle to gain widespread acceptance because they lack openness, adaptability, and reliability, particularly when critical decisions are at stake. In many settings, explaining how an answer was reached is crucial for credibility and transparency. In fields like medicine, where professionals must have high confidence in their findings, questions may arise about how an AI system arrived at a diagnosis from a CT scan (Lebovitz et al., 2022). Even the most advanced AI systems have limitations, and understanding the reasoning behind a diagnosis is vital for two primary reasons: building trust and reducing the risk of potentially life-threatening errors. In other contexts, such as law and order, answers to further "wh" questions (e.g., "why," "when," "where") may also be necessary, and traditional AI is ill-equipped to handle such queries. Hence, there is a growing demand for a new generation of interpretable models that can match the performance of state-of-the-art models (Tennyson, 2013). These interpretable models offer additional transparency, which can enhance the practicality of AI systems in three essential ways:

  • Ensuring Fairness: Interpretable models can identify and rectify biases in training data, promoting fairness in the learning process. This helps in eliminating unfair discrimination and ensuring equitable outcomes.

  • Enhancing Robustness: By highlighting potential sources of noise that may affect performance, interpretable models improve the system's resilience. This can lead to more reliable and consistent results, even in the presence of uncertainties.

  • Feature Selection: Interpretable models enable a clear understanding of the essential features that influence a model's output. This can lead to more efficient and effective decision-making by focusing on the most relevant information, as illustrated in the brief sketch that follows this list.
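
To make the feature-selection point concrete, the following minimal Python sketch ranks input features with permutation importance, a common post-hoc, model-agnostic explanation technique. The sketch is illustrative only and not drawn from the chapter; the dataset, the choice of model, and the scikit-learn calls are assumptions made for the example.

# Minimal sketch (illustrative, not from the chapter): ranking the features an
# otherwise opaque model relies on, using permutation importance from scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train a black-box model; the explanation step below is model-agnostic.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the drop in
# test accuracy, which indicates how strongly the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in ranking[:5]:
    print(f"{name}: {score:.4f}")

Because the importance scores are obtained by perturbing the inputs of an already trained model, the same procedure applies to any classifier, which is one reason post-hoc, model-agnostic techniques of this kind feature prominently in the XAI literature.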
