Explaining the Challenges of Accountability in Machine Learning Systems Beyond Technical Obstacles


Copyright: © 2024 | Pages: 28
DOI: 10.4018/979-8-3693-1479-1.ch003

Abstract

The ability to explain machine learning systems' decisions to people is increasingly sought after, particularly in situations where decisions have significant repercussions for those affected and where accountability must be maintained. Because many ML systems operate as so-called “black box” mechanisms, explainability is frequently cited as a technical obstacle in the design of ML systems and decision procedures. The quantities that ML systems aim to optimize must be specified by their users, and this reveals policy trade-offs that may previously have been hidden or implicit. As ML's use in policy expands, important decisions and judgments may need to be explicitly discussed in public debate.
Chapter Preview

2. Types of Machine Learning (ML) Algorithms

The ML techniques shown in Figure 1 are broadly classified into the following categories:

  • 1. Supervised Learning

  • 2. Unsupervised Learning

  • 3. Reinforcement Learning
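As a minimal sketch of how the three categories above differ, the snippet below shows one toy step of each: fitting from labelled pairs (supervised), clustering unlabelled points (unsupervised), and updating a value estimate from a reward (reinforcement). The data, update rules, and variable names are illustrative assumptions, not taken from the chapter.

```python
# 1. Supervised learning: fit y = w * x from labelled (x, y) pairs
#    via a one-parameter least-squares estimate.
pairs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
w = sum(x * y for x, y in pairs) / sum(x * x for x, _ in pairs)

# 2. Unsupervised learning: one assignment step of 1-D k-means,
#    with no labels -- each point joins its nearest centroid.
points = [1.0, 1.2, 8.9, 9.1]
centroids = [0.0, 10.0]
clusters = [min(range(2), key=lambda c: abs(p - centroids[c])) for p in points]

# 3. Reinforcement learning: one tabular Q-learning update
#    driven by a reward signal rather than labels.
q = {("s0", "a0"): 0.0}
alpha, reward = 0.5, 1.0
q[("s0", "a0")] += alpha * (reward - q[("s0", "a0")])

print(round(w, 2), clusters, q[("s0", "a0")])  # → 2.04 [0, 0, 1, 1] 0.5
```

The key distinction is visible in what drives each update: labels in the first case, geometric structure of the inputs in the second, and an external reward in the third.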

Figure 1.

ML classification


Deep ML models, particularly deep neural networks, help solve a range of complex applications. One demonstrative example is the diagnosis of a disease by models using medical images or other medical data. At the same time, DL mechanisms frequently operate as black-box models, with the specifics of their operation remaining largely unknown (Deo, R. C. 2015). In such cases, it is hard to determine how the model arrived at a particular decision. As a consequence, machine learning models have a hard time fitting into many important applications, like medicine, where doctors need to understand the basis of a specific diagnosis to choose the right treatment. Many methods have been developed to explain deep ML algorithms and make their decision-making procedures comprehensible, as shown in Figure 2. These methods arose because of the absence of explanation techniques in various ML models.

Figure 2.

The overall population score for different categories of ML algorithms


Methodologies for elucidating black-box ML techniques are categorized into two divisions: local approaches, which provide explanations for individual test cases, and global approaches, which aim to clarify the model's broader behavior. Within these explanations, the contribution of each input feature holds paramount significance. A fundamental assumption is that a comprehensive explanation assigns a numerical value to each feature, denoting its influence on the estimate.
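A local, per-feature explanation of the kind just described can be sketched with a simple occlusion approach: score each feature by how much the prediction changes when that feature is replaced by a baseline value. The `black_box` model, the input, and the baseline below are illustrative assumptions, not a method from the chapter.

```python
def black_box(x):
    # Stand-in for an opaque model: a weighted sum with a nonlinearity.
    return max(0.0, 3.0 * x[0] - 1.0 * x[1] + 0.5 * x[2])

def occlusion_attribution(model, x, baseline):
    """Return one numerical contribution per input feature (a local explanation)."""
    full = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline[i]          # knock out feature i
        scores.append(full - model(occluded))
    return scores

x = [1.0, 2.0, 4.0]
baseline = [0.0, 0.0, 0.0]
print(occlusion_attribution(black_box, x, baseline))  # → [3.0, -2.0, 2.0]
```

Because the scores are computed for one specific input, this is a local explanation; a global method would instead summarize feature influence across the whole input distribution, for example by averaging such scores over a dataset.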
