Methods, Techniques, and Application of Explainable Artificial Intelligence

Ankur Dumka, Vaibhav Chaudhari, Anil Kumar Bisht, Ruchira Rawat, Arnav Pandey
Copyright © 2024 | Pages: 18
DOI: 10.4018/979-8-3693-2351-9.ch017

Abstract

With advances in machine learning, its use has increased, and explainable artificial intelligence (XAI) has emerged as an area of research and development that addresses the opacity and complexity of machine learning models. This chapter presents an overview of the current state of explainable artificial intelligence, highlighting its significance, limitations, and potential applications in different fields. It explores several XAI techniques, ranging from post-hoc methods such as SHAP and LIME to decision trees and rule-based systems, and also examines the trade-off between the complexity and interpretability of a model.

Introduction

Artificial intelligence can be described as the simulation of human intelligence in machines that are programmed to learn and think like humans. Artificial intelligence enables machines to perform tasks through algorithms, reducing the need for human intervention. The tasks involved in artificial intelligence include problem solving, perception, language understanding, learning, and decision making.

Artificial intelligence can be categorized into the following two types:

1. Narrow AI or Weak AI

2. General AI or Strong AI

Narrow AI: These techniques are developed to perform a specific task or a narrow range of tasks, such as image recognition or language translation.

General AI: These techniques have the ability to learn, understand, and apply knowledge across a broader range of tasks, similar to human intelligence.

Machine learning is a subset of artificial intelligence in which algorithms are trained on large datasets to recognize patterns and make predictions. Deep learning is, in turn, a subset of machine learning that uses artificial neural networks inspired by the workings of the human brain. AI can be applied in different fields such as healthcare, finance, robotics, and natural language processing.

AI can be divided into following types as:

Figure 1.

Categorization of AI


Figure 1 categorizes AI into three parts:

1. Symbolic AI

2. Statistical AI

3. Explainable AI

Symbolic AI: Symbolic AI models are explainable by design. They focus on encoding knowledge and reasoning rules to perform tasks.

Statistical AI: These models rely on learning from data to make predictions or decisions.

Explainable AI: Explainable AI models are models whose inner logic can be clearly described in human language.

Introduction to Explainable Artificial Intelligence

How can you know whether to trust the result of an AI model?

Suppose a developer builds a new AI model for fraud detection. The model has an input layer, some hidden layers, and an output layer, all connected together. It analyzes all of your transactions and has flagged one of them, a Rs. 100 purchase at a coffee shop, as potentially fraudulent.

Now, how confident can the programmer be that the model is right and that this transaction should be denied or investigated further? Neither the programmer nor anyone else can see the reasoning inside the model. Such a model is termed a black box.

This means that when it comes to applications of AI, not even the engineers or data scientists who created the algorithms have complete knowledge and understanding of what happens inside the model for a specific instance and its result.
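The fraud-detection scenario above can be sketched in a few lines. The network, its weights, and the transaction features (amount, hour of day, merchant risk) are all invented for illustration: the point is that the model emits only a score, with no human-readable reason attached.

```python
import numpy as np

# A toy version of the "fraud detection" network described above:
# one hidden layer, random placeholder weights (a real model would
# learn these from transaction data).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # 3 input features -> 4 hidden units
W2 = rng.normal(size=(4, 1))   # hidden units -> single fraud score

def fraud_score(transaction):
    """Return a fraud probability for [amount, hour_of_day, merchant_risk]."""
    z = np.tanh(transaction @ W1) @ W2
    return float(1 / (1 + np.exp(-z[0])))  # sigmoid squashes to (0, 1)

# The Rs. 100 coffee-shop purchase, encoded as hypothetical features.
score = fraud_score(np.array([100.0, 9.0, 0.2]))
print(f"fraud score: {score:.3f}")
# The score alone offers no explanation of *why* the transaction was
# flagged -- this opacity is exactly the black-box problem.
```

Note that nothing in the code connects the output back to the inputs in a way a human could inspect; the weights are just numbers.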

The solution to this problem is provided by explainable AI (XAI). Explainable artificial intelligence is defined as a collection of well-defined methods and processes that allow users to understand and trust the output of machine learning algorithms, chosen according to the problem statement, by describing an AI model, its anticipated impact, and its potential biases.
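Post-hoc methods such as SHAP and LIME attribute a model's output to its input features. As a minimal, dependency-free sketch of the same idea, the code below uses permutation importance: shuffle one feature at a time and measure how much the model's predictions move. The model and feature names are hypothetical stand-ins, not the chapter's actual case study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box scorer: depends strongly on amount, weakly on
# hour of day, and not at all on merchant_id.
def model(X):
    return 0.8 * X[:, 0] + 0.1 * X[:, 1] + 0.0 * X[:, 2]

X = rng.normal(size=(500, 3))      # 500 synthetic transactions
baseline = model(X)

# Permutation importance: larger prediction shift after shuffling a
# column means that feature matters more to the model.
features = ["amount", "hour_of_day", "merchant_id"]
for j, name in enumerate(features):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    delta = np.mean(np.abs(model(Xp) - baseline))
    print(f"{name:12s} importance ~ {delta:.3f}")
```

Running this ranks `amount` as the dominant feature and `merchant_id` as irrelevant, turning an opaque score into a statement a human can check.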

Contribution of XAI for Environment

XAI is applicable to most machine learning and deep learning based approaches. Machine learning applications in environmental hazard detection, such as detecting fire outbreaks, pollution, or degradation, can be further improved using XAI. XAI plays a crucial role in understanding environmental parameters and the factors affecting the environment.

A model trained with a machine learning approach for detecting forest fire outbreaks or environmental pollution in real time may use XAI to understand the process the model follows for real-time or early detection of these hazards.
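For hazard detection, an alternative to explaining a black box is a rule-based model of the kind this chapter discusses, which is explainable by design: every decision carries its own reason. The sensor names and thresholds below are invented purely for illustration.

```python
def fire_risk(temperature_c, humidity_pct, smoke_ppm):
    """Classify fire risk and return the rule that fired as the explanation."""
    if smoke_ppm > 300:
        return "high", f"smoke level {smoke_ppm} ppm exceeds the 300 ppm threshold"
    if temperature_c > 40 and humidity_pct < 20:
        return "elevated", (f"temperature {temperature_c} C with humidity "
                            f"{humidity_pct}% indicates dry-heat conditions")
    return "low", "all readings within normal ranges"

level, reason = fire_risk(temperature_c=45, humidity_pct=15, smoke_ppm=120)
print(level, "-", reason)
```

Because the logic is a short list of human-readable rules, an operator can audit exactly why an alert was raised, at the cost of the flexibility a learned model would offer.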
