A Look at Cognitive Biases Violating Utilitarianism

DOI: 10.4018/979-8-3693-1766-2.ch003

Abstract

We constantly make decisions, and our decisions affect others as well as ourselves. However, we cannot always think in a completely rational way. Therefore, studies aimed at understanding the biases that affect decisions and at eliminating their negative effects can help us improve our decisions. In this chapter, omission bias, the identifiable victim effect, punishment without deterrence, ex-ante equality, and parochialism, which are considered deviations from utilitarianism, are examined. The reasons for their occurrence and some possible effects of these biases on the decisions taken and the policies created are discussed. Ways to improve rational thinking and to reduce the negative effects of these biases are also addressed.
Chapter Preview

Introduction

In this chapter, some cognitive biases that are considered deviations from the norm of utilitarianism are addressed. The reasons for their occurrence and the possible effects of these biases on the decisions taken and the policies created will be discussed, and studies on reducing the negative effects of these biases are examined. Before doing so, it is worthwhile to briefly touch upon some basic concepts.

A good decision is one made by effectively using the information at the decision-maker’s disposal at the time the decision is made; a good outcome is simply an outcome that the decision-maker likes. We can use good methods and still get a bad outcome; conversely, we can use poor methods and achieve a good outcome by chance. Just as accuracy is not the same as rationality, error is not the same as irrationality. Rationality can be seen as a matter of degree, and there may not be only one way of thinking that is best (Baron, 2008).

A rational decision-maker does not hesitate to engage in self-criticism, is open to alternatives, tolerates doubt, and is purposeful. If one of these features is missing, the person can be said to be irrational and to deviate from some norms (Baron, 1985). A rational decision-maker is presumed to define the problem completely, identify the criteria and weigh all of them accurately, know all the relevant alternatives, evaluate each alternative against each criterion, and calculate the best decision (Bazerman & Moore, 2013). A rational decision-maker also evaluates the probability of each possible outcome, knows the utility that each of them would bring, and combines these two evaluations to choose the option that is the optimal combination of probability and utility (Gilovich & Griffin, 2002). Probability and utility are difficult to calculate, but rational choice theory assumes that people can make these calculations. Defenders of the theory do not claim that people never make mistakes in these calculations; rather, they insist that the mistakes are not systematic (Gilovich & Griffin, 2002).
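
The combination of probability and utility described above is usually formalized as expected utility maximization. As a minimal sketch (the notation here is the standard textbook form and is illustrative rather than taken from the chapter), for an option $a$ whose possible outcomes $x_1, \dots, x_n$ occur with probabilities $p_1, \dots, p_n$, the decision-maker computes

$$EU(a) = \sum_{i=1}^{n} p_i \, u(x_i)$$

and chooses the option with the highest $EU(a)$, where $u(\cdot)$ is the utility assigned to each outcome.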

We make decisions all the time, and our decisions affect others as well as ourselves. For instance, managers and strategists make decisions so that a business can survive and gain competitive advantage. We expect carefully considered, rational decisions from managers on many issues such as recruitment, promotion, and dismissal. Likewise, we expect policymakers to create rational public policies, because these policies affect institutions and individuals by shaping action plans and the distribution of resources in many areas such as education, public health, law, and environmental protection. Indeed, the “economic man” of traditional economic theory is a rational person who is informed about the relevant aspects of his environment, has a stable system of preferences and computational skill, and is infinitely sensitive (Edwards, 1954; Simon, 1955). In neoclassical economics, rationality is generally expressed as consistency, expected utility maximization, and the updating of probabilities in a Bayesian manner (Gigerenzer, 2021).
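
The Bayesian updating mentioned here refers to revising the probability of a hypothesis $H$ in the light of new data $D$ according to Bayes’ rule (given in its standard form only as a reminder, not as part of the chapter’s own exposition):

$$P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}$$

so a rational agent’s beliefs are assumed to track the available evidence in this consistent way.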

Expected utility theory can be used for decision-making under uncertainty. The expected utility principle is commonly attributed to Daniel Bernoulli (Schmeidler & Wakker, 1990; Tversky, 1975). Von Neumann and Morgenstern (1944/1953) were the first to axiomatize the theory, and Savage (1954/1972) contributed to its further development (Tversky, 1975).

The expected value of a game is calculated by multiplying the probability of each outcome by the value of that outcome and then summing the results (Baron, 2008). Now, let us consider the game known as the St. Petersburg Paradox:
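
In one common formulation of the game (summarized here for illustration; the chapter’s own presentation may differ in detail), a fair coin is tossed until it first lands heads, and if the first heads appears on toss $n$ the player is paid $2^n$ dollars. Applying the expected-value rule above gives

$$EV = \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty,$$

an infinite expected value, even though hardly anyone would pay more than a modest amount to play. This puzzle is what motivated Bernoulli’s move from expected value to expected utility.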

Key Terms in this Chapter

Punishment Without Deterrence: Utilitarians punish bad behavior in order to deter or prevent future undesirable behavior. However, people often ignore deterrence in their decisions about penalty or punishment (Baron, 2008).

Ex-Ante Equality: Ex-ante equality asserts that when risk is defined by the probability of death of individuals, an equal distribution of risk is preferred to an unequal distribution (Keller & Sarin, 1988).

Identifiable Victim Effect: The identifiable victim effect, in short, refers to individuals’ greater willingness to help and spend money or resources on identified or specific victims rather than anonymous, unidentified, or statistical victims (Genevsky et al., 2013; Jenni & Loewenstein, 1997; Klusek, 2018; Kogut & Kogut, 2013; Lee & Feeley, 2016, 2018; Perrault et al., 2015).

Bias: Biases can be expressed as systematic errors (Kahneman, 2011) or systematic deviations from a normative model (Baron, 1985, 2008, 2012a, 2014a; Caverni et al., 1990).

Omission Bias: Omission bias refers to the tendency to judge harmful actions as worse and more immoral than equally harmful or even more harmful omissions (Baron, 2008, 2014b; Baron & Ritov, 1994, 2004; Ritov & Baron, 1990; Spranca et al., 1991).

Descriptive Model: Descriptive models attempt to explain how people actually think and how they deviate from normative models (Baron, 1985, 2004, 2006a, 2008, 2014a; Over, 2004).

Normative Model: Normative models express standards that tell us how we should ideally reason and make decisions (Baron, 1985, 2004, 2006a, 2008, 2014a; Over, 2004).

Prescriptive Model: Prescriptive models seek answers to the question of what we can do to improve our thinking and decision-making, and refer to models that bring us closer to normative ideals (Baron, 1985, 2004, 2006a, 2008, 2014a; Over, 2004).

Parochialism: Parochialism occurs when individuals ignore harm to outsiders and favor the benefit of their own group at the expense of outsiders, and sometimes even at the expense of their own self-interest (Baron, 2001, 2006b, 2008, 2012b; Baron & Szymanska, 2011; Baron et al., 2006; Schwartz-Shea & Simmons, 1991).

Heuristic: A heuristic is a simple procedure that helps find adequate, though often imperfect, answers to difficult questions (Kahneman, 2011).
