Bias in Data-Informed Decision Making

Harini Dissanayake, Paul J. Bracewell
Copyright © 2023 | Pages: 14
DOI: 10.4018/978-1-7998-9220-5.ch068

Abstract

With vast amounts of data becoming available in machine-readable formats, decision makers in almost every sector are focused on exploiting data and machine learning to drive the phenomenon of automated decision making, while rapidly eroding human oversight in the process. The description of bias in decision making arising from machine learning outlined in this article aims to demonstrate the scale of the issue and the value of transparency in decisions that affect the daily lives of ordinary people. Fully automated solutions without appropriate governance can be problematic because human biases can be captured in the data and imposed upon the training process of machine learning models. Decision makers must be aware of limitations within the data and proceed with caution. As such, transparency in how data is used to make decisions is vital. Despite the ever-increasing reliance on algorithms in decision making, human oversight remains critical. This article serves to raise awareness of this aspect of machine learning.

Introduction

With vast amounts of data becoming available in machine-readable formats, decision makers in almost every sector are focused on exploiting data and machine learning to drive the phenomenon of automated decision making, while rapidly eroding human oversight in the process. The description of bias in decision making arising from machine learning outlined in this chapter aims to demonstrate the scale of the issue and the value of transparency in decisions that affect the daily lives of ordinary humans.

As a practical example, in New Zealand, algorithm use has already extended to cases such as a New Zealand Police tool that assesses family violence risk, and a Ministry of Social Development classification tool for identifying school leavers who require the greatest education and employment support. The concern raised by many of the government bodies that adopt operational algorithms is the potential bias in such modelling, with the identification and mitigation of these potential biases a priority for the future (Statistics New Zealand, 2018). This chapter identifies such biases in both government and non-government decision-making data, with a special focus on identifying discriminatory bias in operational algorithms and sample selection bias inherent in decision-making data.

Behavioral research finds that human decision making is clouded by ‘human errors’. In various contexts, the literature documents that human problem solving tends to differ systematically from the predictions of rational choice models. Fifty years of work in behavioral research and decision making have established that human decisions are often not as rational as one might expect. The development of ‘data science’ allows decision makers to detect present-day human errors impacting specific decisions in real time. One objective of this chapter is to reveal the extent of human biases found in decision-making information in present-day New Zealand.

Machine learning systems are becoming increasingly prominent in automated decision making in New Zealand. Systems that are sensitive to the type of bias that results in discrimination must be used with caution. Given the scale and impact of the bodies that have already adopted machine learning, it is crucial that measures are taken to prevent unfair discrimination through legal as well as technical means. There has been significant effort to avoid and correct discriminatory bias in algorithms while also making them more transparent. New Zealand's government recently claimed a world first in setting standards for how public agencies should use the algorithms that increasingly drive official decisions about every aspect of public life. These standards, the “Algorithm charter for Aotearoa New Zealand”, are designed to guide government use of algorithms and to improve data transparency and accountability. The charter outlines several measures, including:

  1. clearly explaining how decisions are informed by algorithms;

  2. making sure data is fit for purpose by identifying and managing bias;

  3. ensuring that privacy, ethics, and human rights are safeguarded by regularly peer-reviewing algorithms;

  4. explaining the role of humans in decisions informed by algorithms; and

  5. providing a channel for citizens to appeal against decisions informed by algorithms.

Key Terms in this Chapter

Sample Selection Bias: Occurs when a sample fails to represent the target population because it was not randomly selected.

Operational Algorithm: An algorithm that is in use and has an impact on some aspect of everyday life.

Automated: Describes a system that minimizes human intervention so that operations are carried out automatically.

Discriminatory Bias: A result in which an algorithm, based on patterns in its training data, learns to apply prejudicial treatment.

Missing Not at Random: Occurs when the probability that a value is missing depends on the value itself.

Decision Making: A process of gathering information and assessing alternative solutions to arrive at the optimal choice.

Human Error: A mistake made by an individual due to limitations of human ability, rather than an external failure such as a machine fault.
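Two of these terms, sample selection bias and missing not at random, can be illustrated with a small simulation. The sketch below is not from the chapter; it uses entirely hypothetical income data, and all thresholds and response probabilities are illustrative assumptions. A non-random, income-dependent response rate shifts the sample mean upward, while value-dependent (MNAR) missingness shifts the observed mean downward:

```python
import random

random.seed(0)

# Hypothetical population: incomes (in $1000s) for 100,000 people.
population = [random.gauss(50, 15) for _ in range(100_000)]

# Unbiased baseline: a simple random sample.
random_sample = random.sample(population, 5_000)

# Sample selection bias: assume (hypothetically) that higher-income
# people are more likely to respond to a survey, so the sample
# over-represents them and the estimated mean drifts upward.
biased_sample = [x for x in population
                 if random.random() < min(1.0, x / 100)]

# Missing not at random (MNAR): the chance a value is missing depends
# on the value itself -- here, high incomes are often withheld, so the
# observed mean drifts downward.
mnar = [None if (x > 70 and random.random() < 0.8) else x
        for x in population]
observed = [x for x in mnar if x is not None]

def mean(xs):
    return sum(xs) / len(xs)

print(f"population mean:    {mean(population):.1f}")
print(f"random sample mean: {mean(random_sample):.1f}")  # close to population
print(f"biased sample mean: {mean(biased_sample):.1f}")  # shifted upward
print(f"MNAR observed mean: {mean(observed):.1f}")       # shifted downward
```

Note that in the MNAR case the missingness cannot be diagnosed from the observed values alone, which is precisely why such data, and operational algorithms trained on it, require the kind of governance and transparency the charter describes.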
